Nov 12 20:47:45.048403 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:47:45.048432 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:47:45.048446 kernel: BIOS-provided physical RAM map: Nov 12 20:47:45.048454 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 12 20:47:45.048460 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 12 20:47:45.048467 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 12 20:47:45.048476 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 12 20:47:45.048483 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 12 20:47:45.048491 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 12 20:47:45.048501 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 12 20:47:45.048514 kernel: NX (Execute Disable) protection: active Nov 12 20:47:45.048536 kernel: APIC: Static calls initialized Nov 12 20:47:45.048551 kernel: SMBIOS 2.8 present. Nov 12 20:47:45.048564 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 12 20:47:45.048577 kernel: Hypervisor detected: KVM Nov 12 20:47:45.048594 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 20:47:45.048607 kernel: kvm-clock: using sched offset of 3486094099 cycles Nov 12 20:47:45.048625 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 20:47:45.048640 kernel: tsc: Detected 2294.608 MHz processor Nov 12 20:47:45.048656 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:47:45.048669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:47:45.048683 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 12 20:47:45.048696 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 12 20:47:45.048710 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:47:45.048728 kernel: ACPI: Early table checksum verification disabled Nov 12 20:47:45.048741 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 12 20:47:45.048754 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:47:45.048768 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:47:45.048781 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:47:45.048794 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 12 20:47:45.048806 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:47:45.048819 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:47:45.048833 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:47:45.048883 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 12 20:47:45.048902 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Nov 12 20:47:45.048911 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 12 20:47:45.048920 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 12 20:47:45.048937 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 12 20:47:45.048950 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 12 20:47:45.048967 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 12 20:47:45.049007 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 12 20:47:45.049026 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:47:45.049046 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 12 20:47:45.049066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 12 20:47:45.049085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 12 20:47:45.049105 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Nov 12 20:47:45.049125 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Nov 12 20:47:45.049148 kernel: Zone ranges: Nov 12 20:47:45.049168 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:47:45.049187 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 12 20:47:45.049207 kernel: Normal empty Nov 12 20:47:45.049226 kernel: Movable zone start for each node Nov 12 20:47:45.049246 kernel: Early memory node ranges Nov 12 20:47:45.049265 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 12 20:47:45.049284 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 12 20:47:45.049303 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 12 20:47:45.049327 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:47:45.049349 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 12 20:47:45.049369 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 12 20:47:45.049388 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 12 20:47:45.049407 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 20:47:45.049427 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:47:45.049446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 12 20:47:45.049466 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 20:47:45.049485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:47:45.049508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 20:47:45.049528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 20:47:45.049547 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:47:45.049567 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 12 20:47:45.049586 kernel: TSC deadline timer available Nov 12 20:47:45.049605 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 12 20:47:45.049624 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 12 20:47:45.049644 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 12 20:47:45.049664 kernel: Booting paravirtualized kernel on KVM Nov 12 20:47:45.049687 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:47:45.049711 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 12 20:47:45.049731 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Nov 12 20:47:45.049750 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 12 20:47:45.049769 kernel: pcpu-alloc: [0] 0 1 Nov 12 20:47:45.049788 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 12 20:47:45.049809 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:47:45.049829 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:47:45.049867 kernel: random: crng init done Nov 12 20:47:45.049901 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 20:47:45.049920 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 12 20:47:45.049940 kernel: Fallback order for Node 0: 0 Nov 12 20:47:45.049959 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Nov 12 20:47:45.049979 kernel: Policy zone: DMA32 Nov 12 20:47:45.049998 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:47:45.050019 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 125152K reserved, 0K cma-reserved) Nov 12 20:47:45.050038 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 20:47:45.050061 kernel: Kernel/User page tables isolation: enabled Nov 12 20:47:45.050081 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:47:45.050101 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:47:45.050120 kernel: Dynamic Preempt: voluntary Nov 12 20:47:45.050140 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:47:45.050166 kernel: rcu: RCU event tracing is enabled. Nov 12 20:47:45.050186 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 20:47:45.050206 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:47:45.050226 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:47:45.050245 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:47:45.050269 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:47:45.050305 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 20:47:45.050315 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 12 20:47:45.050328 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:47:45.050338 kernel: Console: colour VGA+ 80x25 Nov 12 20:47:45.050347 kernel: printk: console [tty0] enabled Nov 12 20:47:45.050356 kernel: printk: console [ttyS0] enabled Nov 12 20:47:45.050365 kernel: ACPI: Core revision 20230628 Nov 12 20:47:45.050374 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 12 20:47:45.050386 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:47:45.050395 kernel: x2apic enabled Nov 12 20:47:45.050405 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:47:45.050414 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 12 20:47:45.050423 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Nov 12 20:47:45.050432 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Nov 12 20:47:45.050441 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 12 20:47:45.050450 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 12 20:47:45.050474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:47:45.050483 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 20:47:45.050493 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:47:45.050506 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:47:45.050515 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 12 20:47:45.050524 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:47:45.050534 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:47:45.050543 kernel: MDS: Mitigation: Clear CPU buffers Nov 12 20:47:45.050553 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:47:45.050568 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:47:45.050578 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:47:45.050588 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:47:45.050597 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:47:45.050607 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 12 20:47:45.050616 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:47:45.050626 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:47:45.050635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:47:45.050648 kernel: landlock: Up and running. Nov 12 20:47:45.050658 kernel: SELinux: Initializing. Nov 12 20:47:45.050667 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 12 20:47:45.050677 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 12 20:47:45.050687 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 12 20:47:45.050696 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:47:45.050706 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:47:45.050715 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:47:45.050725 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Nov 12 20:47:45.050738 kernel: signal: max sigframe size: 1776 Nov 12 20:47:45.050747 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:47:45.050757 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:47:45.050766 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:47:45.050775 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:47:45.050785 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:47:45.050797 kernel: .... node #0, CPUs: #1 Nov 12 20:47:45.050806 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:47:45.050816 kernel: smpboot: Max logical packages: 1 Nov 12 20:47:45.050829 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Nov 12 20:47:45.050838 kernel: devtmpfs: initialized Nov 12 20:47:45.050881 kernel: x86/mm: Memory block size: 128MB Nov 12 20:47:45.050891 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:47:45.050900 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 20:47:45.050910 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:47:45.050919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:47:45.050929 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:47:45.050938 kernel: audit: type=2000 audit(1731444463.758:1): state=initialized audit_enabled=0 res=1 Nov 12 20:47:45.050952 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:47:45.050961 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:47:45.050971 kernel: cpuidle: using governor menu Nov 12 20:47:45.050980 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:47:45.050989 kernel: dca service started, version 1.12.1 Nov 12 20:47:45.050999 kernel: PCI: Using configuration type 1 for base access Nov 12 20:47:45.051008 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 20:47:45.051018 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:47:45.051027 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:47:45.051040 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:47:45.051050 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:47:45.051059 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:47:45.051069 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:47:45.051078 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:47:45.051087 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:47:45.051097 kernel: ACPI: Interpreter enabled Nov 12 20:47:45.051106 kernel: ACPI: PM: (supports S0 S5) Nov 12 20:47:45.051116 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:47:45.051129 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:47:45.051139 kernel: PCI: Using E820 reservations for host bridge windows Nov 12 20:47:45.051148 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 12 20:47:45.051158 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 20:47:45.051424 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:47:45.051538 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 12 20:47:45.051638 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 12 20:47:45.051656 kernel: acpiphp: Slot [3] registered Nov 12 20:47:45.051665 kernel: acpiphp: Slot [4] registered Nov 12 20:47:45.051675 kernel: acpiphp: Slot [5] registered Nov 12 20:47:45.051685 kernel: acpiphp: Slot [6] registered Nov 12 20:47:45.051694 kernel: acpiphp: Slot [7] registered Nov 12 20:47:45.051704 kernel: acpiphp: Slot [8] registered Nov 12 20:47:45.051713 kernel: acpiphp: Slot [9] registered Nov 12 20:47:45.051723 kernel: acpiphp: Slot [10] registered Nov 12 20:47:45.051732 kernel: acpiphp: Slot [11] registered Nov 12 20:47:45.051741 kernel: acpiphp: Slot [12] registered Nov 12 20:47:45.051755 kernel: acpiphp: Slot [13] registered Nov 12 20:47:45.051764 kernel: acpiphp: Slot [14] registered Nov 12 20:47:45.051774 kernel: acpiphp: Slot [15] registered Nov 12 20:47:45.051783 kernel: acpiphp: Slot [16] registered Nov 12 20:47:45.051793 kernel: acpiphp: Slot [17] registered Nov 12 20:47:45.051802 kernel: acpiphp: Slot [18] registered Nov 12 20:47:45.051811 kernel: acpiphp: Slot [19] registered Nov 12 20:47:45.051821 kernel: acpiphp: Slot [20] registered Nov 12 20:47:45.051830 kernel: acpiphp: Slot [21] registered Nov 12 20:47:45.051860 kernel: acpiphp: Slot [22] registered Nov 12 20:47:45.051870 kernel: acpiphp: Slot [23] registered Nov 12 20:47:45.051879 kernel: acpiphp: Slot [24] registered Nov 12 20:47:45.051888 kernel: acpiphp: Slot [25] registered Nov 12 20:47:45.051898 kernel: acpiphp: Slot [26] registered Nov 12 20:47:45.051907 kernel: acpiphp: Slot [27] registered Nov 12 20:47:45.051917 kernel: acpiphp: Slot [28] registered Nov 12 20:47:45.051926 kernel: acpiphp: Slot [29] registered Nov 12 20:47:45.051936 kernel: acpiphp: Slot [30] registered Nov 12 20:47:45.051946 kernel: acpiphp: Slot [31] registered Nov 12 20:47:45.051959 kernel: PCI host bridge to bus 0000:00 Nov 12 20:47:45.052084 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 12 20:47:45.052179 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Nov 12 20:47:45.052268 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 20:47:45.052356 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 12 20:47:45.052443 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 12 20:47:45.052531 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 20:47:45.052659 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 12 20:47:45.052772 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 12 20:47:45.052910 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Nov 12 20:47:45.053011 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Nov 12 20:47:45.053112 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 12 20:47:45.053210 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 12 20:47:45.053319 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 12 20:47:45.053419 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 12 20:47:45.053525 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Nov 12 20:47:45.053623 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Nov 12 20:47:45.053737 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 12 20:47:45.053835 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 12 20:47:45.054571 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 12 20:47:45.054721 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Nov 12 20:47:45.054826 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Nov 12 20:47:45.054953 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Nov 12 20:47:45.055051 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Nov 12 20:47:45.055148 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Nov 12 20:47:45.055248 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 12 20:47:45.055373 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 12 20:47:45.055472 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Nov 12 20:47:45.055570 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Nov 12 20:47:45.055668 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Nov 12 20:47:45.055780 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 12 20:47:45.055962 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Nov 12 20:47:45.056086 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Nov 12 20:47:45.056193 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 12 20:47:45.056313 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Nov 12 20:47:45.056413 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Nov 12 20:47:45.056536 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Nov 12 20:47:45.056634 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 12 20:47:45.056746 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Nov 12 20:47:45.056853 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Nov 12 20:47:45.056961 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Nov 12 20:47:45.057058 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Nov 12 20:47:45.057165 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Nov 12 20:47:45.057263 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Nov 12 20:47:45.057362 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Nov 12 20:47:45.057460 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Nov 12 20:47:45.057574 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Nov 12 20:47:45.057682 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Nov 12 20:47:45.057780 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 12 20:47:45.057792 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 12 20:47:45.057802 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 12 20:47:45.057813 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 20:47:45.057822 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 12 20:47:45.057832 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 12 20:47:45.057944 kernel: iommu: Default domain type: Translated Nov 12 20:47:45.057954 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:47:45.057964 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:47:45.057974 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 20:47:45.057984 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 12 20:47:45.057993 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 12 20:47:45.058096 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 12 20:47:45.058194 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 12 20:47:45.058322 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 12 20:47:45.058335 kernel: vgaarb: loaded Nov 12 20:47:45.058345 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 12 20:47:45.058355 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 12 20:47:45.058365 kernel: clocksource: Switched to clocksource kvm-clock Nov 12 20:47:45.058374 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:47:45.058385 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:47:45.058397 kernel: pnp: PnP ACPI init Nov 12 20:47:45.058422 kernel: pnp: PnP ACPI: found 4 devices Nov 12 20:47:45.058437 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:47:45.058447 kernel: NET: Registered PF_INET protocol family Nov 12 20:47:45.058457 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 20:47:45.058467 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 12 20:47:45.058476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:47:45.058486 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:47:45.058496 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 12 20:47:45.058506 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 12 20:47:45.058516 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 12 20:47:45.058529 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 12 20:47:45.058539 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:47:45.058548 kernel: NET: Registered PF_XDP protocol family Nov 12 20:47:45.058678 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 20:47:45.058793 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 
20:47:45.058943 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 20:47:45.059050 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 12 20:47:45.059171 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 12 20:47:45.059342 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 12 20:47:45.059496 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 12 20:47:45.059524 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 12 20:47:45.059690 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 35811 usecs Nov 12 20:47:45.059719 kernel: PCI: CLS 0 bytes, default 64 Nov 12 20:47:45.059740 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 12 20:47:45.059762 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Nov 12 20:47:45.059783 kernel: Initialise system trusted keyrings Nov 12 20:47:45.059804 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 12 20:47:45.059832 kernel: Key type asymmetric registered Nov 12 20:47:45.060578 kernel: Asymmetric key parser 'x509' registered Nov 12 20:47:45.060598 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:47:45.060613 kernel: io scheduler mq-deadline registered Nov 12 20:47:45.060626 kernel: io scheduler kyber registered Nov 12 20:47:45.060642 kernel: io scheduler bfq registered Nov 12 20:47:45.060658 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 20:47:45.060674 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 12 20:47:45.060691 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 12 20:47:45.060715 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 12 20:47:45.060731 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:47:45.060746 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:47:45.060762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 20:47:45.060777 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 20:47:45.060792 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 20:47:45.061067 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 12 20:47:45.061096 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 20:47:45.061252 kernel: rtc_cmos 00:03: registered as rtc0 Nov 12 20:47:45.061402 kernel: rtc_cmos 00:03: setting system clock to 2024-11-12T20:47:44 UTC (1731444464) Nov 12 20:47:45.061545 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 12 20:47:45.061567 kernel: intel_pstate: CPU model not supported Nov 12 20:47:45.061585 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:47:45.061603 kernel: Segment Routing with IPv6 Nov 12 20:47:45.061620 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:47:45.061637 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:47:45.061654 kernel: Key type dns_resolver registered Nov 12 20:47:45.061675 kernel: IPI shorthand broadcast: enabled Nov 12 20:47:45.061689 kernel: sched_clock: Marking stable (944002590, 154706244)->(1252052190, -153343356) Nov 12 20:47:45.061702 kernel: registered taskstats version 1 Nov 12 20:47:45.061716 kernel: Loading compiled-in X.509 certificates Nov 12 20:47:45.061730 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:47:45.061745 kernel: Key type .fscrypt registered 
Nov 12 20:47:45.061759 kernel: Key type fscrypt-provisioning registered Nov 12 20:47:45.061783 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 12 20:47:45.061810 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:47:45.061831 kernel: ima: No architecture policies found Nov 12 20:47:45.061866 kernel: clk: Disabling unused clocks Nov 12 20:47:45.061887 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:47:45.061909 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:47:45.061961 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:47:45.062017 kernel: Run /init as init process Nov 12 20:47:45.062040 kernel: with arguments: Nov 12 20:47:45.062062 kernel: /init Nov 12 20:47:45.062088 kernel: with environment: Nov 12 20:47:45.062109 kernel: HOME=/ Nov 12 20:47:45.062131 kernel: TERM=linux Nov 12 20:47:45.062153 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:47:45.062179 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:47:45.062205 systemd[1]: Detected virtualization kvm. Nov 12 20:47:45.062229 systemd[1]: Detected architecture x86-64. Nov 12 20:47:45.062251 systemd[1]: Running in initrd. Nov 12 20:47:45.062292 systemd[1]: No hostname configured, using default hostname. Nov 12 20:47:45.062317 systemd[1]: Hostname set to . Nov 12 20:47:45.062345 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:47:45.062368 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:47:45.062391 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:47:45.062414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:47:45.062438 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:47:45.062461 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:47:45.062488 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:47:45.062511 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:47:45.062537 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:47:45.062561 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:47:45.062584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:47:45.062607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:47:45.062631 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:47:45.062658 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:47:45.062682 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:47:45.062709 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:47:45.062732 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:47:45.062756 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 12 20:47:45.062783 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:47:45.062807 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:47:45.062830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:47:45.062918 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:47:45.062942 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:47:45.062965 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:47:45.062989 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:47:45.063012 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:47:45.063035 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:47:45.063064 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:47:45.063087 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:47:45.063110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:47:45.063134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:45.063207 systemd-journald[183]: Collecting audit messages is disabled. Nov 12 20:47:45.063265 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:47:45.063288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:47:45.063312 systemd-journald[183]: Journal started Nov 12 20:47:45.063367 systemd-journald[183]: Runtime Journal (/run/log/journal/d41f0af2075b40c8afda29a739ea2944) is 4.9M, max 39.3M, 34.4M free. Nov 12 20:47:45.066920 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:47:45.072024 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 20:47:45.077290 systemd-modules-load[184]: Inserted module 'overlay' Nov 12 20:47:45.088155 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:47:45.140349 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 20:47:45.140404 kernel: Bridge firewalling registered Nov 12 20:47:45.107249 systemd-modules-load[184]: Inserted module 'br_netfilter' Nov 12 20:47:45.149116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:47:45.150208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:47:45.157979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:45.161363 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:47:45.162515 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:47:45.171103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:47:45.173594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:47:45.176274 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:47:45.205482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:47:45.208291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 12 20:47:45.216106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:47:45.216885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:47:45.219040 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 20:47:45.253023 dracut-cmdline[218]: dracut-dracut-053 Nov 12 20:47:45.257179 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:47:45.284236 systemd-resolved[217]: Positive Trust Anchors: Nov 12 20:47:45.284268 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:47:45.284357 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:47:45.294575 systemd-resolved[217]: Defaulting to hostname 'linux'. Nov 12 20:47:45.297356 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:47:45.298135 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:47:45.366885 kernel: SCSI subsystem initialized Nov 12 20:47:45.378880 kernel: Loading iSCSI transport class v2.0-870. Nov 12 20:47:45.390880 kernel: iscsi: registered transport (tcp) Nov 12 20:47:45.416021 kernel: iscsi: registered transport (qla4xxx) Nov 12 20:47:45.416098 kernel: QLogic iSCSI HBA Driver Nov 12 20:47:45.469404 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 20:47:45.475083 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 20:47:45.525526 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 20:47:45.525643 kernel: device-mapper: uevent: version 1.0.3 Nov 12 20:47:45.525675 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 20:47:45.576900 kernel: raid6: avx2x4 gen() 17622 MB/s Nov 12 20:47:45.593917 kernel: raid6: avx2x2 gen() 17066 MB/s Nov 12 20:47:45.612503 kernel: raid6: avx2x1 gen() 12004 MB/s Nov 12 20:47:45.612592 kernel: raid6: using algorithm avx2x4 gen() 17622 MB/s Nov 12 20:47:45.631025 kernel: raid6: .... xor() 5767 MB/s, rmw enabled Nov 12 20:47:45.631123 kernel: raid6: using avx2x2 recovery algorithm Nov 12 20:47:45.655891 kernel: xor: automatically using best checksumming function avx Nov 12 20:47:45.850925 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 20:47:45.869501 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:47:45.877208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 12 20:47:45.914074 systemd-udevd[401]: Using default interface naming scheme 'v255'. Nov 12 20:47:45.921989 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:47:45.930118 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 20:47:45.960262 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Nov 12 20:47:46.012037 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:47:46.019343 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:47:46.113920 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:47:46.126130 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 20:47:46.165571 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 20:47:46.171722 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:47:46.173997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:47:46.174643 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:47:46.183089 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 20:47:46.215927 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:47:46.261834 kernel: ACPI: bus type USB registered Nov 12 20:47:46.261963 kernel: usbcore: registered new interface driver usbfs Nov 12 20:47:46.261996 kernel: usbcore: registered new interface driver hub Nov 12 20:47:46.262019 kernel: usbcore: registered new device driver usb Nov 12 20:47:46.272873 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 12 20:47:46.360921 kernel: scsi host0: Virtio SCSI HBA Nov 12 20:47:46.361246 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 20:47:46.361278 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 12 20:47:46.361471 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 20:47:46.361497 kernel: GPT:9289727 != 125829119 Nov 12 20:47:46.361515 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 20:47:46.361532 kernel: GPT:9289727 != 125829119 Nov 12 20:47:46.361549 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 20:47:46.361567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:47:46.361600 kernel: libata version 3.00 loaded. Nov 12 20:47:46.361628 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 12 20:47:46.427365 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 12 20:47:46.427588 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 12 20:47:46.427768 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 12 20:47:46.427971 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 12 20:47:46.428157 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 12 20:47:46.428197 kernel: AES CTR mode by8 optimization enabled Nov 12 20:47:46.428218 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Nov 12 20:47:46.428399 kernel: scsi host1: ata_piix Nov 12 20:47:46.428607 kernel: scsi host2: ata_piix Nov 12 20:47:46.428907 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Nov 12 20:47:46.428938 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Nov 12 20:47:46.428958 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 12 20:47:46.429205 kernel: hub 1-0:1.0: USB hub found Nov 12 20:47:46.429437 kernel: hub 1-0:1.0: 2 ports detected Nov 12 20:47:46.355934 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:47:46.502518 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Nov 12 20:47:46.502572 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (459) Nov 12 20:47:46.356694 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:47:46.357616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:47:46.358329 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:47:46.358498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:46.359152 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:46.373146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:46.488973 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 12 20:47:46.511063 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 12 20:47:46.518631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:46.525258 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 12 20:47:46.526135 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 12 20:47:46.538444 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:47:46.553258 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 20:47:46.558124 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:47:46.570420 disk-uuid[529]: Primary Header is updated. Nov 12 20:47:46.570420 disk-uuid[529]: Secondary Entries is updated. Nov 12 20:47:46.570420 disk-uuid[529]: Secondary Header is updated. Nov 12 20:47:46.580874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:47:46.588866 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:47:46.605923 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:47:47.604991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 12 20:47:47.605066 disk-uuid[532]: The operation has completed successfully. Nov 12 20:47:47.666595 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 20:47:47.667626 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 20:47:47.685116 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 12 20:47:47.692125 sh[561]: Success Nov 12 20:47:47.715804 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 12 20:47:47.790426 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 20:47:47.800979 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 20:47:47.802566 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 20:47:47.835024 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 20:47:47.835108 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:47:47.838266 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 20:47:47.838352 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 20:47:47.839679 kernel: BTRFS info (device dm-0): using free space tree Nov 12 20:47:47.853873 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 20:47:47.855473 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 20:47:47.862297 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 20:47:47.865142 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 20:47:47.883927 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:47:47.883999 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:47:47.886375 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:47:47.891880 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:47:47.903433 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 20:47:47.908600 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:47:47.918062 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 20:47:47.925037 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 20:47:48.034612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:47:48.044239 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:47:48.083410 ignition[645]: Ignition 2.19.0 Nov 12 20:47:48.083423 ignition[645]: Stage: fetch-offline Nov 12 20:47:48.083468 ignition[645]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:48.083483 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:48.085238 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 12 20:47:48.083615 ignition[645]: parsed url from cmdline: "" Nov 12 20:47:48.083620 ignition[645]: no config URL provided Nov 12 20:47:48.083626 ignition[645]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:47:48.083635 ignition[645]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:47:48.083641 ignition[645]: failed to fetch config: resource requires networking Nov 12 20:47:48.084083 ignition[645]: Ignition finished successfully Nov 12 20:47:48.097378 systemd-networkd[746]: lo: Link UP Nov 12 20:47:48.097392 systemd-networkd[746]: lo: Gained carrier Nov 12 20:47:48.100767 systemd-networkd[746]: Enumeration completed Nov 12 20:47:48.101269 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 12 20:47:48.101274 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 12 20:47:48.101598 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:47:48.102307 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:47:48.102313 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:47:48.102827 systemd[1]: Reached target network.target - Network. Nov 12 20:47:48.103179 systemd-networkd[746]: eth0: Link UP Nov 12 20:47:48.103185 systemd-networkd[746]: eth0: Gained carrier Nov 12 20:47:48.103199 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 12 20:47:48.107274 systemd-networkd[746]: eth1: Link UP Nov 12 20:47:48.107281 systemd-networkd[746]: eth1: Gained carrier Nov 12 20:47:48.107296 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:47:48.109915 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 12 20:47:48.122322 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.5/20 acquired from 169.254.169.253 Nov 12 20:47:48.126930 systemd-networkd[746]: eth0: DHCPv4 address 147.182.197.11/20, gateway 147.182.192.1 acquired from 169.254.169.253 Nov 12 20:47:48.138316 ignition[753]: Ignition 2.19.0 Nov 12 20:47:48.138335 ignition[753]: Stage: fetch Nov 12 20:47:48.138777 ignition[753]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:48.138802 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:48.139008 ignition[753]: parsed url from cmdline: "" Nov 12 20:47:48.139015 ignition[753]: no config URL provided Nov 12 20:47:48.139025 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:47:48.139076 ignition[753]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:47:48.139113 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 12 20:47:48.158349 ignition[753]: GET result: OK Nov 12 20:47:48.158540 ignition[753]: parsing config with SHA512: 7c536cfbf817700c381ccab526eb36e51b7b49f34d05d87ed6d3c86439f84aedae2ade50ecab00e979a9ef14748949113994426b7aa4e1962324e36ad1b1e214 Nov 12 20:47:48.165946 unknown[753]: fetched base config from "system" Nov 12 20:47:48.165960 unknown[753]: fetched base config from "system" Nov 12 20:47:48.166650 ignition[753]: fetch: fetch complete Nov 12 20:47:48.165968 unknown[753]: fetched user config from "digitalocean" Nov 12 20:47:48.166657 ignition[753]: fetch: fetch passed Nov 12 20:47:48.168993 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 12 20:47:48.166719 ignition[753]: Ignition finished successfully Nov 12 20:47:48.184029 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 20:47:48.202825 ignition[760]: Ignition 2.19.0 Nov 12 20:47:48.202836 ignition[760]: Stage: kargs Nov 12 20:47:48.203056 ignition[760]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:48.205284 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 20:47:48.203067 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:48.203948 ignition[760]: kargs: kargs passed Nov 12 20:47:48.203999 ignition[760]: Ignition finished successfully Nov 12 20:47:48.218102 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 20:47:48.240233 ignition[766]: Ignition 2.19.0 Nov 12 20:47:48.240246 ignition[766]: Stage: disks Nov 12 20:47:48.240488 ignition[766]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:48.240500 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:48.242358 ignition[766]: disks: disks passed Nov 12 20:47:48.243688 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 20:47:48.242443 ignition[766]: Ignition finished successfully Nov 12 20:47:48.249614 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 20:47:48.250224 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:47:48.251216 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:47:48.252153 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:47:48.253137 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:47:48.268562 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 12 20:47:48.291157 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 20:47:48.299348 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 20:47:48.305989 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 20:47:48.406867 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 20:47:48.407690 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 20:47:48.409922 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 20:47:48.416971 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:47:48.427018 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 20:47:48.432119 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Nov 12 20:47:48.437130 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 12 20:47:48.441024 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 20:47:48.443943 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (782) Nov 12 20:47:48.441106 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:47:48.446922 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 20:47:48.453541 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:47:48.453578 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:47:48.453605 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:47:48.460033 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 12 20:47:48.466900 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:47:48.470573 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:47:48.536997 coreos-metadata[785]: Nov 12 20:47:48.536 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:48.542684 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 20:47:48.550702 coreos-metadata[785]: Nov 12 20:47:48.550 INFO Fetch successful Nov 12 20:47:48.552056 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Nov 12 20:47:48.554075 coreos-metadata[784]: Nov 12 20:47:48.553 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:48.561501 coreos-metadata[785]: Nov 12 20:47:48.561 INFO wrote hostname ci-4081.2.0-2-eeaeb2d4c6 to /sysroot/etc/hostname Nov 12 20:47:48.563301 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 20:47:48.564001 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 20:47:48.569116 coreos-metadata[784]: Nov 12 20:47:48.566 INFO Fetch successful Nov 12 20:47:48.569700 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 20:47:48.576261 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Nov 12 20:47:48.576430 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Nov 12 20:47:48.685463 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 20:47:48.691988 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Nov 12 20:47:48.695231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 20:47:48.707875 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:47:48.745658 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 20:47:48.758869 ignition[902]: INFO : Ignition 2.19.0 Nov 12 20:47:48.758869 ignition[902]: INFO : Stage: mount Nov 12 20:47:48.758869 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:48.758869 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:48.761755 ignition[902]: INFO : mount: mount passed Nov 12 20:47:48.761755 ignition[902]: INFO : Ignition finished successfully Nov 12 20:47:48.761841 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 20:47:48.766051 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 20:47:48.834566 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 20:47:48.844129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:47:48.856930 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (915) Nov 12 20:47:48.861649 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:47:48.861734 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:47:48.861768 kernel: BTRFS info (device vda6): using free space tree Nov 12 20:47:48.867894 kernel: BTRFS info (device vda6): auto enabling async discard Nov 12 20:47:48.870703 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:47:48.907723 ignition[932]: INFO : Ignition 2.19.0 Nov 12 20:47:48.907723 ignition[932]: INFO : Stage: files Nov 12 20:47:48.909395 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:48.909395 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:48.911049 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:47:48.911914 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:47:48.911914 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:47:48.916094 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:47:48.917025 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:47:48.917025 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:47:48.916652 unknown[932]: wrote ssh authorized keys file for user: core Nov 12 20:47:48.919643 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:47:48.919643 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:47:48.954515 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:47:49.111935 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:47:49.113520 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:47:49.113520 ignition[932]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:47:49.113520 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:47:49.113520 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:47:49.113520 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:47:49.113520 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:47:49.113520 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:47:49.113520 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:47:49.121575 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:47:49.121575 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:47:49.121575 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:47:49.121575 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:47:49.121575 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:47:49.121575 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Nov 12 20:47:49.349052 systemd-networkd[746]: eth0: Gained IPv6LL Nov 12 20:47:49.612759 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:47:49.920339 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:47:49.920339 ignition[932]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 20:47:49.942925 ignition[932]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:47:49.944946 ignition[932]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:47:49.944946 ignition[932]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 20:47:49.944946 ignition[932]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:47:49.944946 ignition[932]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:47:49.944946 ignition[932]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:47:49.944946 ignition[932]: INFO : files: createResultFile: 
createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:47:49.944946 ignition[932]: INFO : files: files passed Nov 12 20:47:49.944946 ignition[932]: INFO : Ignition finished successfully Nov 12 20:47:49.945742 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:47:49.956216 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:47:49.963063 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:47:49.965571 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:47:49.965715 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:47:49.987079 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:47:49.987079 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:47:49.990728 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:47:49.990080 systemd-networkd[746]: eth1: Gained IPv6LL Nov 12 20:47:49.993820 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:47:49.994801 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:47:50.001103 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:47:50.048967 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:47:50.049151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:47:50.051377 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:47:50.052075 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:47:50.053465 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:47:50.062066 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:47:50.080007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:47:50.086200 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:47:50.103545 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:47:50.104331 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:47:50.105860 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:47:50.106880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:47:50.107016 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:47:50.108616 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:47:50.109423 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:47:50.110708 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:47:50.111997 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:47:50.113092 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:47:50.114438 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:47:50.115789 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 12 20:47:50.117310 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:47:50.118624 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:47:50.120044 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:47:50.121216 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:47:50.121429 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:47:50.122882 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:47:50.123761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:47:50.124761 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:47:50.125061 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:47:50.126129 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:47:50.126411 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:47:50.128006 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:47:50.128138 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:47:50.128867 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:47:50.128975 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:47:50.130316 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 12 20:47:50.130543 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 20:47:50.140860 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:47:50.145278 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:47:50.145932 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:47:50.146253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:47:50.150103 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:47:50.150399 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:47:50.172293 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:47:50.173937 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:47:50.184880 ignition[984]: INFO : Ignition 2.19.0 Nov 12 20:47:50.184880 ignition[984]: INFO : Stage: umount Nov 12 20:47:50.184880 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:47:50.184880 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:47:50.190892 ignition[984]: INFO : umount: umount passed Nov 12 20:47:50.190892 ignition[984]: INFO : Ignition finished successfully Nov 12 20:47:50.191715 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:47:50.193538 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:47:50.194434 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:47:50.194502 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:47:50.195194 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:47:50.195262 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:47:50.198748 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:47:50.198824 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 12 20:47:50.199438 systemd[1]: Stopped target network.target - Network. Nov 12 20:47:50.206042 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:47:50.206220 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:47:50.207246 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:47:50.208330 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:47:50.211927 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:47:50.212779 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:47:50.214212 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:47:50.215251 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:47:50.215317 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:47:50.216421 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:47:50.216480 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:47:50.217684 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:47:50.217737 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:47:50.218828 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:47:50.218904 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:47:50.220052 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:47:50.221340 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:47:50.224117 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:47:50.224905 systemd-networkd[746]: eth0: DHCPv6 lease lost Nov 12 20:47:50.225229 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:47:50.225355 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:47:50.226543 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:47:50.226661 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:47:50.228967 systemd-networkd[746]: eth1: DHCPv6 lease lost Nov 12 20:47:50.230385 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:47:50.230506 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:47:50.232163 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:47:50.232239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:47:50.237086 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:47:50.238223 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:47:50.238314 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:47:50.239145 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:47:50.241527 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:47:50.242627 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:47:50.256498 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:47:50.256666 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:47:50.258830 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:47:50.259352 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Nov 12 20:47:50.259995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:47:50.260046 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:47:50.262519 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:47:50.262585 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:47:50.264298 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:47:50.264372 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:47:50.265515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:47:50.265568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:47:50.276585 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:47:50.277262 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:47:50.277349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:47:50.278005 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:47:50.278071 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:47:50.278898 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:47:50.278968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:47:50.280447 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:47:50.280513 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:47:50.283882 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:47:50.283963 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:47:50.285174 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:47:50.285229 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:47:50.287749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:47:50.287823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:50.289156 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:47:50.289310 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:47:50.291123 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:47:50.291236 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:47:50.293592 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:47:50.300221 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:47:50.315746 systemd[1]: Switching root. Nov 12 20:47:50.366291 systemd-journald[183]: Journal stopped Nov 12 20:47:51.795365 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
Nov 12 20:47:51.795440 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:47:51.795456 kernel: SELinux: policy capability open_perms=1 Nov 12 20:47:51.795469 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:47:51.795481 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:47:51.795498 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:47:51.795514 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:47:51.795531 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:47:51.795547 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:47:51.795562 kernel: audit: type=1403 audit(1731444470.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:47:51.795576 systemd[1]: Successfully loaded SELinux policy in 46.540ms. Nov 12 20:47:51.795601 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.654ms. Nov 12 20:47:51.795615 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:47:51.795629 systemd[1]: Detected virtualization kvm. Nov 12 20:47:51.795645 systemd[1]: Detected architecture x86-64. Nov 12 20:47:51.795658 systemd[1]: Detected first boot. Nov 12 20:47:51.795671 systemd[1]: Hostname set to <ci-4081.2.0-2-eeaeb2d4c6>. Nov 12 20:47:51.795683 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:47:51.795696 zram_generator::config[1027]: No configuration found. Nov 12 20:47:51.795714 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:47:51.795728 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:47:51.795741 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 12 20:47:51.795756 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:47:51.795769 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:47:51.795782 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:47:51.795795 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:47:51.795807 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:47:51.795820 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:47:51.795833 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:47:51.801053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:47:51.801088 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:47:51.801111 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:47:51.801125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:47:51.801137 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:47:51.801150 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:47:51.801163 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Nov 12 20:47:51.801176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:47:51.801189 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:47:51.801202 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:47:51.801215 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 12 20:47:51.801232 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:47:51.801245 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:47:51.801261 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:47:51.801274 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:47:51.801287 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:47:51.801299 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:47:51.801315 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:47:51.801328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:47:51.801342 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:47:51.801355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:47:51.801368 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:47:51.801380 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:47:51.801393 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:47:51.801406 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:47:51.801419 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:47:51.801434 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:47:51.801448 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:51.801461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:47:51.801473 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:47:51.801486 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:47:51.801500 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:47:51.801512 systemd[1]: Reached target machines.target - Containers. Nov 12 20:47:51.801525 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:47:51.801539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:51.801555 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:47:51.801568 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:47:51.801581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:51.801594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:47:51.801607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:47:51.801619 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 12 20:47:51.801632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:47:51.801646 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:47:51.801662 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:47:51.801674 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:47:51.801687 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:47:51.801699 systemd[1]: Stopped systemd-fsck-usr.service. Nov 12 20:47:51.801712 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:47:51.801724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:47:51.801737 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:47:51.801750 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:47:51.801763 kernel: loop: module loaded Nov 12 20:47:51.801780 kernel: fuse: init (API version 7.39) Nov 12 20:47:51.801793 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:47:51.801806 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:47:51.801818 systemd[1]: Stopped verity-setup.service. Nov 12 20:47:51.801831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:51.801856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:47:51.801870 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:47:51.801882 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:47:51.801895 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:47:51.801912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:47:51.801925 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:47:51.801939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:47:51.801955 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:47:51.801968 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:47:51.801981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:51.801994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:51.802007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:47:51.802040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:47:51.802061 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:47:51.802077 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:47:51.802090 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:47:51.802103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:47:51.802116 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:47:51.802129 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:47:51.802142 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Nov 12 20:47:51.802155 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:47:51.802169 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:47:51.802221 systemd-journald[1096]: Collecting audit messages is disabled. Nov 12 20:47:51.802248 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:47:51.802261 kernel: ACPI: bus type drm_connector registered Nov 12 20:47:51.802273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:47:51.802286 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:47:51.802300 systemd-journald[1096]: Journal started Nov 12 20:47:51.802330 systemd-journald[1096]: Runtime Journal (/run/log/journal/d41f0af2075b40c8afda29a739ea2944) is 4.9M, max 39.3M, 34.4M free. Nov 12 20:47:51.326429 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:47:51.345478 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:47:51.346160 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:47:51.806918 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:47:51.815878 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:47:51.821867 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:47:51.821948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:51.833867 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:47:51.833960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:47:51.846037 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:47:51.849863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:47:51.854868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:47:51.880879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:47:51.885873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:47:51.892047 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:47:51.893337 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:47:51.894283 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:47:51.894421 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:47:51.895265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:47:51.896753 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:47:51.897405 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:47:51.898449 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:47:51.899445 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Nov 12 20:47:51.934247 kernel: loop0: detected capacity change from 0 to 142488 Nov 12 20:47:51.939789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:47:51.951994 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:47:51.964600 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:47:51.977100 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:47:51.978962 systemd-tmpfiles[1131]: ACLs are not supported, ignoring. Nov 12 20:47:51.978978 systemd-tmpfiles[1131]: ACLs are not supported, ignoring. Nov 12 20:47:51.983579 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:47:52.007166 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:47:52.002982 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:47:52.009217 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:47:52.032830 systemd-journald[1096]: Time spent on flushing to /var/log/journal/d41f0af2075b40c8afda29a739ea2944 is 66.009ms for 1000 entries. Nov 12 20:47:52.032830 systemd-journald[1096]: System Journal (/var/log/journal/d41f0af2075b40c8afda29a739ea2944) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:47:52.129873 systemd-journald[1096]: Received client request to flush runtime journal. Nov 12 20:47:52.129954 kernel: loop1: detected capacity change from 0 to 205544 Nov 12 20:47:52.129980 kernel: loop2: detected capacity change from 0 to 8 Nov 12 20:47:52.048114 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 20:47:52.076231 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:47:52.078689 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:47:52.129009 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:47:52.131370 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:47:52.144879 kernel: loop3: detected capacity change from 0 to 140768 Nov 12 20:47:52.160171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:47:52.214907 kernel: loop4: detected capacity change from 0 to 142488 Nov 12 20:47:52.215585 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Nov 12 20:47:52.215619 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Nov 12 20:47:52.235633 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:47:52.262922 kernel: loop5: detected capacity change from 0 to 205544 Nov 12 20:47:52.283177 kernel: loop6: detected capacity change from 0 to 8 Nov 12 20:47:52.290893 kernel: loop7: detected capacity change from 0 to 140768 Nov 12 20:47:52.325916 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 12 20:47:52.328944 (sd-merge)[1174]: Merged extensions into '/usr'. Nov 12 20:47:52.337055 systemd[1]: Reloading requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:47:52.337505 systemd[1]: Reloading... Nov 12 20:47:52.498878 zram_generator::config[1203]: No configuration found. 
Nov 12 20:47:52.702940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:47:52.757934 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:47:52.759043 systemd[1]: Reloading finished in 420 ms. Nov 12 20:47:52.788711 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:47:52.790511 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:47:52.802185 systemd[1]: Starting ensure-sysext.service... Nov 12 20:47:52.813830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:47:52.834118 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:47:52.834141 systemd[1]: Reloading... Nov 12 20:47:52.877471 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:47:52.879470 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:47:52.884189 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:47:52.884918 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Nov 12 20:47:52.885026 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Nov 12 20:47:52.891428 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:47:52.891446 systemd-tmpfiles[1247]: Skipping /boot Nov 12 20:47:52.924055 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:47:52.924068 systemd-tmpfiles[1247]: Skipping /boot Nov 12 20:47:53.027873 zram_generator::config[1277]: No configuration found. Nov 12 20:47:53.240613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:47:53.366945 systemd[1]: Reloading finished in 532 ms. Nov 12 20:47:53.386458 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:47:53.391555 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:47:53.408269 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:47:53.414197 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:47:53.419272 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:47:53.430235 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:47:53.436200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:47:53.442155 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:47:53.457330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.457780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 12 20:47:53.472472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:53.476422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:47:53.482606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:47:53.484189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:53.484434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.498341 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:47:53.500012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:53.500221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:53.505080 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.505281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:53.512379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:53.513198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:53.513463 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.514546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:47:53.514725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:47:53.516681 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:47:53.520829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.521456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:53.535258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:47:53.538223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:47:53.539182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:53.539387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.543614 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:47:53.543977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:47:53.547599 systemd[1]: Finished ensure-sysext.service. Nov 12 20:47:53.562310 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 20:47:53.564026 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:47:53.565665 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:47:53.579880 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Nov 12 20:47:53.587706 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:47:53.587895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:47:53.597518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:53.597773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:53.599698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:47:53.609434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:47:53.610095 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:47:53.612831 augenrules[1357]: No rules Nov 12 20:47:53.614558 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:47:53.616907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:47:53.622707 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Nov 12 20:47:53.642992 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:47:53.654170 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:47:53.655089 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:47:53.667486 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:47:53.673007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:47:53.680051 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:47:53.762608 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:47:53.763970 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:47:53.843247 systemd-networkd[1372]: lo: Link UP Nov 12 20:47:53.843586 systemd-networkd[1372]: lo: Gained carrier Nov 12 20:47:53.844523 systemd-networkd[1372]: Enumeration completed Nov 12 20:47:53.844730 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:47:53.854192 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:47:53.862523 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 12 20:47:53.871995 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 12 20:47:53.873176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.873368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:47:53.875106 systemd-resolved[1324]: Positive Trust Anchors: Nov 12 20:47:53.875122 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:47:53.875158 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:47:53.876058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:47:53.888134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:47:53.895135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:47:53.895950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:47:53.896010 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:47:53.896037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:47:53.899869 kernel: ISO 9660 Extensions: RRIP_1991A Nov 12 20:47:53.902197 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 12 20:47:53.908241 systemd-resolved[1324]: Using system hostname 'ci-4081.2.0-2-eeaeb2d4c6'. Nov 12 20:47:53.913001 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:47:53.915254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:47:53.916437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:47:53.921395 systemd[1]: Reached target network.target - Network. Nov 12 20:47:53.922180 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:47:53.928759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:47:53.928966 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:47:53.929723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:47:53.946779 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:47:53.948948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:47:53.960319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:47:53.964214 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1388) Nov 12 20:47:53.983417 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1388) Nov 12 20:47:53.983505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1379) Nov 12 20:47:54.016435 systemd-networkd[1372]: eth1: Configuring with /run/systemd/network/10-c2:58:ee:91:53:27.network. 
Nov 12 20:47:54.017478 systemd-networkd[1372]: eth1: Link UP Nov 12 20:47:54.017486 systemd-networkd[1372]: eth1: Gained carrier Nov 12 20:47:54.022901 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Nov 12 20:47:54.033701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:47:54.043231 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:47:54.062554 systemd-networkd[1372]: eth0: Configuring with /run/systemd/network/10-6e:a6:fe:5a:8e:b5.network. Nov 12 20:47:54.065324 systemd-networkd[1372]: eth0: Link UP Nov 12 20:47:54.065452 systemd-networkd[1372]: eth0: Gained carrier Nov 12 20:47:54.068228 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 12 20:47:54.068238 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Nov 12 20:47:54.072621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:47:54.073614 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Nov 12 20:47:54.073785 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Nov 12 20:47:54.086899 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:47:54.116910 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 12 20:47:54.151926 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 12 20:47:54.186912 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:47:54.206528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:54.224119 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 12 20:47:54.224248 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 12 20:47:54.230884 kernel: Console: switching to colour dummy device 80x25 Nov 12 20:47:54.234240 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 12 20:47:54.234356 kernel: [drm] features: -context_init Nov 12 20:47:54.247244 kernel: [drm] number of scanouts: 1 Nov 12 20:47:54.247321 kernel: [drm] number of cap sets: 0 Nov 12 20:47:54.249882 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 12 20:47:54.251764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:47:54.252072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:54.260342 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 12 20:47:54.260414 kernel: Console: switching to colour frame buffer device 128x48 Nov 12 20:47:54.260315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:54.294304 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 12 20:47:54.304794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:47:54.305037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:54.324426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:47:54.380912 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:47:54.411313 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:47:54.419079 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Nov 12 20:47:54.426810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:47:54.442947 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:47:54.480713 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:47:54.482270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:47:54.482457 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:47:54.482707 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:47:54.482900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:47:54.483286 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:47:54.483592 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:47:54.483709 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:47:54.483800 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:47:54.483833 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:47:54.486236 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:47:54.488503 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:47:54.490934 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:47:54.498876 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:47:54.509137 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:47:54.510839 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:47:54.513102 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:47:54.513575 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:47:54.514250 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:47:54.514277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:47:54.516159 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:47:54.523002 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:47:54.529148 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 12 20:47:54.536039 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:47:54.543986 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:47:54.547206 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:47:54.548704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:47:54.553110 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:47:54.563985 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:47:54.568607 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:47:54.574207 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 12 20:47:54.586129 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:47:54.588603 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:47:54.595316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:47:54.597206 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:47:54.605555 dbus-daemon[1438]: [system] SELinux support is enabled Nov 12 20:47:54.611027 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:47:54.612667 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:47:54.627998 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:47:54.638366 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:47:54.638410 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:47:54.640761 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:47:54.649397 jq[1439]: false Nov 12 20:47:54.654568 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:47:54.654816 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:47:54.667681 extend-filesystems[1440]: Found loop4 Nov 12 20:47:54.672195 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 12 20:47:54.672250 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:47:54.680066 extend-filesystems[1440]: Found loop5 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found loop6 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found loop7 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda1 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda2 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda3 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found usr Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda4 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda6 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda7 Nov 12 20:47:54.680066 extend-filesystems[1440]: Found vda9 Nov 12 20:47:54.680066 extend-filesystems[1440]: Checking size of /dev/vda9 Nov 12 20:47:54.722493 coreos-metadata[1437]: Nov 12 20:47:54.679 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:54.722493 coreos-metadata[1437]: Nov 12 20:47:54.700 INFO Fetch successful Nov 12 20:47:54.708430 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:47:54.709520 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:47:54.726222 jq[1451]: true Nov 12 20:47:54.725294 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:47:54.726095 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 12 20:47:54.726447 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:47:54.734881 update_engine[1450]: I20241112 20:47:54.730028 1450 main.cc:92] Flatcar Update Engine starting Nov 12 20:47:54.738794 update_engine[1450]: I20241112 20:47:54.738730 1450 update_check_scheduler.cc:74] Next update check in 10m53s Nov 12 20:47:54.747942 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:47:54.757706 tar[1461]: linux-amd64/helm Nov 12 20:47:54.758098 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:47:54.768719 extend-filesystems[1440]: Resized partition /dev/vda9 Nov 12 20:47:54.783270 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:47:54.792618 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 12 20:47:54.815326 jq[1471]: true Nov 12 20:47:54.830024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1373) Nov 12 20:47:54.867077 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 12 20:47:54.872095 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:47:54.942877 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 12 20:47:54.961629 systemd-logind[1448]: New seat seat0. Nov 12 20:47:55.020305 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:47:55.020335 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:47:55.021176 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:47:55.025706 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 20:47:55.025706 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 12 20:47:55.025706 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 12 20:47:55.035569 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Nov 12 20:47:55.035569 extend-filesystems[1440]: Found vdb Nov 12 20:47:55.035190 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:47:55.042580 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:47:55.036192 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:47:55.071689 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:47:55.072593 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:47:55.089077 systemd[1]: Starting sshkeys.service... Nov 12 20:47:55.097920 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:47:55.125062 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 12 20:47:55.136266 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 12 20:47:55.146934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:47:55.159034 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:47:55.218321 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:47:55.218605 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Nov 12 20:47:55.235947 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:47:55.241108 coreos-metadata[1521]: Nov 12 20:47:55.235 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 12 20:47:55.257971 coreos-metadata[1521]: Nov 12 20:47:55.255 INFO Fetch successful Nov 12 20:47:55.279952 unknown[1521]: wrote ssh authorized keys file for user: core Nov 12 20:47:55.295927 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:47:55.309413 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:47:55.318659 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:47:55.321270 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:47:55.374385 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:47:55.374188 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 20:47:55.380314 systemd[1]: Finished sshkeys.service. Nov 12 20:47:55.391882 containerd[1462]: time="2024-11-12T20:47:55.390699280Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:47:55.441921 containerd[1462]: time="2024-11-12T20:47:55.441809873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:55.445726 containerd[1462]: time="2024-11-12T20:47:55.445659544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:55.445726 containerd[1462]: time="2024-11-12T20:47:55.445711956Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:47:55.445726 containerd[1462]: time="2024-11-12T20:47:55.445735957Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:47:55.446040 containerd[1462]: time="2024-11-12T20:47:55.446011150Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:47:55.446092 containerd[1462]: time="2024-11-12T20:47:55.446042142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446138 containerd[1462]: time="2024-11-12T20:47:55.446113521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446138 containerd[1462]: time="2024-11-12T20:47:55.446130909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446384 containerd[1462]: time="2024-11-12T20:47:55.446350776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446384 containerd[1462]: time="2024-11-12T20:47:55.446376580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446477 containerd[1462]: time="2024-11-12T20:47:55.446396658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446477 containerd[1462]: time="2024-11-12T20:47:55.446410941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446522 containerd[1462]: time="2024-11-12T20:47:55.446501249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446783 containerd[1462]: time="2024-11-12T20:47:55.446748518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446997 containerd[1462]: time="2024-11-12T20:47:55.446967415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:55.446997 containerd[1462]: time="2024-11-12T20:47:55.446989987Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:47:55.447107 containerd[1462]: time="2024-11-12T20:47:55.447088327Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:47:55.447153 containerd[1462]: time="2024-11-12T20:47:55.447139005Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:47:55.467970 containerd[1462]: time="2024-11-12T20:47:55.467884530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:47:55.468242 containerd[1462]: time="2024-11-12T20:47:55.468013218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:47:55.468242 containerd[1462]: time="2024-11-12T20:47:55.468044921Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:47:55.468242 containerd[1462]: time="2024-11-12T20:47:55.468068529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:47:55.468242 containerd[1462]: time="2024-11-12T20:47:55.468089390Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:47:55.468418 containerd[1462]: time="2024-11-12T20:47:55.468285499Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:47:55.469334 containerd[1462]: time="2024-11-12T20:47:55.469263523Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:47:55.469594 containerd[1462]: time="2024-11-12T20:47:55.469479212Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:47:55.469594 containerd[1462]: time="2024-11-12T20:47:55.469515337Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:47:55.469594 containerd[1462]: time="2024-11-12T20:47:55.469536929Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Nov 12 20:47:55.469594 containerd[1462]: time="2024-11-12T20:47:55.469566034Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469594 containerd[1462]: time="2024-11-12T20:47:55.469592197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469616016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469641121Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469666860Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469690596Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469711037Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469735127Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469793359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469821940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.469862 containerd[1462]: time="2024-11-12T20:47:55.469858113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.469897716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.469927990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.469953529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.469973132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.469998290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470021860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470063972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470088615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470111235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470136721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470164666Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470199913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470222965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.470878 containerd[1462]: time="2024-11-12T20:47:55.470241178Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470343706Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470379635Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470402319Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470426488Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470444286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470467304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470487474Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:47:55.473371 containerd[1462]: time="2024-11-12T20:47:55.470503495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:47:55.473794 containerd[1462]: time="2024-11-12T20:47:55.471881417Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:47:55.473794 containerd[1462]: time="2024-11-12T20:47:55.471988057Z" level=info msg="Connect containerd service" Nov 12 20:47:55.473794 containerd[1462]: time="2024-11-12T20:47:55.472063574Z" level=info msg="using legacy CRI server" Nov 12 20:47:55.473794 containerd[1462]: time="2024-11-12T20:47:55.472089927Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:47:55.474226 containerd[1462]: time="2024-11-12T20:47:55.473819216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:47:55.477501 containerd[1462]: time="2024-11-12T20:47:55.477347083Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:47:55.481001 
containerd[1462]: time="2024-11-12T20:47:55.477577424Z" level=info msg="Start subscribing containerd event" Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.477643833Z" level=info msg="Start recovering state" Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.477729263Z" level=info msg="Start event monitor" Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.477752356Z" level=info msg="Start snapshots syncer" Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.477762798Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.477769889Z" level=info msg="Start streaming server" Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.477811404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.477878624Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:47:55.481001 containerd[1462]: time="2024-11-12T20:47:55.478008264Z" level=info msg="containerd successfully booted in 0.088751s" Nov 12 20:47:55.478134 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:47:55.646618 tar[1461]: linux-amd64/LICENSE Nov 12 20:47:55.647945 tar[1461]: linux-amd64/README.md Nov 12 20:47:55.665507 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:47:55.763221 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:47:55.771372 systemd[1]: Started sshd@0-147.182.197.11:22-139.178.68.195:35398.service - OpenSSH per-connection server daemon (139.178.68.195:35398). Nov 12 20:47:55.872887 sshd[1545]: Accepted publickey for core from 139.178.68.195 port 35398 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:55.876637 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:55.877106 systemd-networkd[1372]: eth1: Gained IPv6LL Nov 12 20:47:55.877690 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Nov 12 20:47:55.884465 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:47:55.889619 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:47:55.898294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:47:55.911458 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:47:55.937193 systemd-logind[1448]: New session 1 of user core. Nov 12 20:47:55.939940 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:47:55.948300 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:47:55.959829 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:47:55.981125 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:47:55.991262 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:47:56.007268 systemd-networkd[1372]: eth0: Gained IPv6LL Nov 12 20:47:56.007783 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Nov 12 20:47:56.009061 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:47:56.198527 systemd[1560]: Queued start job for default target default.target. 
Nov 12 20:47:56.209602 systemd[1560]: Created slice app.slice - User Application Slice. Nov 12 20:47:56.209653 systemd[1560]: Reached target paths.target - Paths. Nov 12 20:47:56.209679 systemd[1560]: Reached target timers.target - Timers. Nov 12 20:47:56.212683 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:47:56.238595 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:47:56.238776 systemd[1560]: Reached target sockets.target - Sockets. Nov 12 20:47:56.238805 systemd[1560]: Reached target basic.target - Basic System. Nov 12 20:47:56.238883 systemd[1560]: Reached target default.target - Main User Target. Nov 12 20:47:56.238925 systemd[1560]: Startup finished in 219ms. Nov 12 20:47:56.239301 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:47:56.249254 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:47:56.325390 systemd[1]: Started sshd@1-147.182.197.11:22-139.178.68.195:35410.service - OpenSSH per-connection server daemon (139.178.68.195:35410). Nov 12 20:47:56.392066 sshd[1571]: Accepted publickey for core from 139.178.68.195 port 35410 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:56.395734 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:56.412637 systemd-logind[1448]: New session 2 of user core. Nov 12 20:47:56.423340 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:47:56.497139 sshd[1571]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:56.515712 systemd[1]: sshd@1-147.182.197.11:22-139.178.68.195:35410.service: Deactivated successfully. Nov 12 20:47:56.518924 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:47:56.521014 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:47:56.529188 systemd[1]: Started sshd@2-147.182.197.11:22-139.178.68.195:35414.service - OpenSSH per-connection server daemon (139.178.68.195:35414). Nov 12 20:47:56.533640 systemd-logind[1448]: Removed session 2. Nov 12 20:47:56.574852 sshd[1578]: Accepted publickey for core from 139.178.68.195 port 35414 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:47:56.576475 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:56.584770 systemd-logind[1448]: New session 3 of user core. Nov 12 20:47:56.589094 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:47:56.660688 sshd[1578]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:56.665887 systemd[1]: sshd@2-147.182.197.11:22-139.178.68.195:35414.service: Deactivated successfully. Nov 12 20:47:56.667049 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:47:56.669664 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:47:56.673952 systemd-logind[1448]: Removed session 3. Nov 12 20:47:57.280035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:47:57.284925 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:47:57.288551 systemd[1]: Startup finished in 1.105s (kernel) + 5.884s (initrd) + 6.709s (userspace) = 13.699s. 
Nov 12 20:47:57.291439 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:47:58.117073 kubelet[1588]: E1112 20:47:58.116926 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:47:58.119734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:47:58.120039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:47:58.121190 systemd[1]: kubelet.service: Consumed 1.345s CPU time. Nov 12 20:48:06.686317 systemd[1]: Started sshd@3-147.182.197.11:22-139.178.68.195:34646.service - OpenSSH per-connection server daemon (139.178.68.195:34646). Nov 12 20:48:06.726605 sshd[1601]: Accepted publickey for core from 139.178.68.195 port 34646 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:48:06.729107 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:06.736941 systemd-logind[1448]: New session 4 of user core. Nov 12 20:48:06.748191 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:48:06.812402 sshd[1601]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:06.828765 systemd[1]: sshd@3-147.182.197.11:22-139.178.68.195:34646.service: Deactivated successfully. Nov 12 20:48:06.831094 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:48:06.832050 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:48:06.841310 systemd[1]: Started sshd@4-147.182.197.11:22-139.178.68.195:34654.service - OpenSSH per-connection server daemon (139.178.68.195:34654). Nov 12 20:48:06.843741 systemd-logind[1448]: Removed session 4. Nov 12 20:48:06.885084 sshd[1608]: Accepted publickey for core from 139.178.68.195 port 34654 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:48:06.887428 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:06.895263 systemd-logind[1448]: New session 5 of user core. Nov 12 20:48:06.906206 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:48:06.965372 sshd[1608]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:06.979246 systemd[1]: sshd@4-147.182.197.11:22-139.178.68.195:34654.service: Deactivated successfully. Nov 12 20:48:06.981725 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:48:06.984137 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:48:06.988354 systemd[1]: Started sshd@5-147.182.197.11:22-139.178.68.195:34660.service - OpenSSH per-connection server daemon (139.178.68.195:34660). Nov 12 20:48:06.992830 systemd-logind[1448]: Removed session 5. Nov 12 20:48:07.048209 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 34660 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:48:07.050554 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:07.055792 systemd-logind[1448]: New session 6 of user core. Nov 12 20:48:07.066196 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 12 20:48:07.133148 sshd[1615]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:07.143338 systemd[1]: sshd@5-147.182.197.11:22-139.178.68.195:34660.service: Deactivated successfully. Nov 12 20:48:07.145901 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:48:07.147887 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:48:07.153604 systemd[1]: Started sshd@6-147.182.197.11:22-139.178.68.195:34670.service - OpenSSH per-connection server daemon (139.178.68.195:34670). Nov 12 20:48:07.155877 systemd-logind[1448]: Removed session 6. Nov 12 20:48:07.202352 sshd[1622]: Accepted publickey for core from 139.178.68.195 port 34670 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:48:07.204215 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:07.211686 systemd-logind[1448]: New session 7 of user core. Nov 12 20:48:07.218188 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:48:07.296342 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:48:07.296664 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:48:07.310331 sudo[1625]: pam_unix(sudo:session): session closed for user root Nov 12 20:48:07.314645 sshd[1622]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:07.324459 systemd[1]: sshd@6-147.182.197.11:22-139.178.68.195:34670.service: Deactivated successfully. Nov 12 20:48:07.327778 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:48:07.330935 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:48:07.338322 systemd[1]: Started sshd@7-147.182.197.11:22-139.178.68.195:34682.service - OpenSSH per-connection server daemon (139.178.68.195:34682). Nov 12 20:48:07.340821 systemd-logind[1448]: Removed session 7. Nov 12 20:48:07.378731 sshd[1630]: Accepted publickey for core from 139.178.68.195 port 34682 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:48:07.381090 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:07.389305 systemd-logind[1448]: New session 8 of user core. Nov 12 20:48:07.391105 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:48:07.452632 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:48:07.453104 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:48:07.458594 sudo[1634]: pam_unix(sudo:session): session closed for user root Nov 12 20:48:07.467402 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:48:07.468464 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:48:07.490363 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:48:07.493812 auditctl[1637]: No rules Nov 12 20:48:07.494314 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:48:07.494621 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:48:07.498342 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:48:07.552445 augenrules[1655]: No rules Nov 12 20:48:07.554359 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Nov 12 20:48:07.556203 sudo[1633]: pam_unix(sudo:session): session closed for user root Nov 12 20:48:07.560221 sshd[1630]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:07.570800 systemd[1]: sshd@7-147.182.197.11:22-139.178.68.195:34682.service: Deactivated successfully. Nov 12 20:48:07.573515 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:48:07.576154 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:48:07.580323 systemd[1]: Started sshd@8-147.182.197.11:22-139.178.68.195:34684.service - OpenSSH per-connection server daemon (139.178.68.195:34684). Nov 12 20:48:07.583459 systemd-logind[1448]: Removed session 8. Nov 12 20:48:07.632572 sshd[1663]: Accepted publickey for core from 139.178.68.195 port 34684 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:48:07.635473 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:07.644141 systemd-logind[1448]: New session 9 of user core. Nov 12 20:48:07.653219 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:48:07.715937 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:48:07.716408 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:48:08.209086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:48:08.216334 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:48:08.218370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:08.221422 (dockerd)[1682]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:48:08.380151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:08.383199 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:48:08.484133 kubelet[1691]: E1112 20:48:08.483968 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:48:08.489995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:48:08.490384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:48:08.835729 dockerd[1682]: time="2024-11-12T20:48:08.834569159Z" level=info msg="Starting up" Nov 12 20:48:09.076979 dockerd[1682]: time="2024-11-12T20:48:09.076920069Z" level=info msg="Loading containers: start." Nov 12 20:48:09.251915 kernel: Initializing XFRM netlink socket Nov 12 20:48:09.287497 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Nov 12 20:48:09.360884 systemd-networkd[1372]: docker0: Link UP Nov 12 20:48:09.367451 systemd-timesyncd[1350]: Contacted time server 208.113.130.146:123 (2.flatcar.pool.ntp.org). Nov 12 20:48:09.367539 systemd-timesyncd[1350]: Initial clock synchronization to Tue 2024-11-12 20:48:09.733067 UTC. Nov 12 20:48:09.395077 dockerd[1682]: time="2024-11-12T20:48:09.395030077Z" level=info msg="Loading containers: done." 
Nov 12 20:48:09.426404 dockerd[1682]: time="2024-11-12T20:48:09.426313705Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:48:09.426677 dockerd[1682]: time="2024-11-12T20:48:09.426446339Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:48:09.426677 dockerd[1682]: time="2024-11-12T20:48:09.426570545Z" level=info msg="Daemon has completed initialization" Nov 12 20:48:09.510620 dockerd[1682]: time="2024-11-12T20:48:09.510441828Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:48:09.510814 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:48:10.434172 containerd[1462]: time="2024-11-12T20:48:10.434019799Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\"" Nov 12 20:48:11.268638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929433173.mount: Deactivated successfully. Nov 12 20:48:12.987798 containerd[1462]: time="2024-11-12T20:48:12.986838037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:12.989246 containerd[1462]: time="2024-11-12T20:48:12.989182867Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27975588" Nov 12 20:48:12.992002 containerd[1462]: time="2024-11-12T20:48:12.991905092Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:12.997721 containerd[1462]: time="2024-11-12T20:48:12.997634258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:13.000106 containerd[1462]: time="2024-11-12T20:48:13.000047646Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 2.565875736s" Nov 12 20:48:13.000446 containerd[1462]: time="2024-11-12T20:48:13.000286163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\"" Nov 12 20:48:13.002912 containerd[1462]: time="2024-11-12T20:48:13.002840706Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\"" Nov 12 20:48:14.753905 containerd[1462]: time="2024-11-12T20:48:14.753478393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:14.757768 containerd[1462]: time="2024-11-12T20:48:14.757683104Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24701922" Nov 12 20:48:14.760616 containerd[1462]: time="2024-11-12T20:48:14.760559098Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:14.771576 containerd[1462]: time="2024-11-12T20:48:14.771475481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:14.772512 containerd[1462]: time="2024-11-12T20:48:14.772465196Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 1.769570198s" Nov 12 20:48:14.772512 containerd[1462]: time="2024-11-12T20:48:14.772509265Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\"" Nov 12 20:48:14.773378 containerd[1462]: time="2024-11-12T20:48:14.772976768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\"" Nov 12 20:48:16.133543 containerd[1462]: time="2024-11-12T20:48:16.133394524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:16.139213 containerd[1462]: time="2024-11-12T20:48:16.139122089Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18657606" Nov 12 20:48:16.143835 containerd[1462]: time="2024-11-12T20:48:16.143727906Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:16.155425 containerd[1462]: time="2024-11-12T20:48:16.155345456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:16.157138 containerd[1462]: time="2024-11-12T20:48:16.156963920Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 1.383941618s" Nov 12 20:48:16.157138 containerd[1462]: time="2024-11-12T20:48:16.157023778Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\"" Nov 12 20:48:16.157913 containerd[1462]: time="2024-11-12T20:48:16.157728992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\"" Nov 12 20:48:16.163300 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 12 20:48:17.420081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840139214.mount: Deactivated successfully. 
Nov 12 20:48:18.022628 containerd[1462]: time="2024-11-12T20:48:18.022554322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:18.028813 containerd[1462]: time="2024-11-12T20:48:18.028723040Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30226814" Nov 12 20:48:18.033776 containerd[1462]: time="2024-11-12T20:48:18.033719303Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:18.044501 containerd[1462]: time="2024-11-12T20:48:18.044383997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:18.045822 containerd[1462]: time="2024-11-12T20:48:18.045259685Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 1.887489806s" Nov 12 20:48:18.045822 containerd[1462]: time="2024-11-12T20:48:18.045315434Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\"" Nov 12 20:48:18.046398 containerd[1462]: time="2024-11-12T20:48:18.046245403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:48:18.508785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:48:18.515222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:18.671103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:18.686249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860510411.mount: Deactivated successfully. Nov 12 20:48:18.688311 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:48:18.752943 kubelet[1924]: E1112 20:48:18.752851 1924 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:48:18.755132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:48:18.755343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:48:19.237061 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Nov 12 20:48:20.057937 containerd[1462]: time="2024-11-12T20:48:20.057395568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:20.061416 containerd[1462]: time="2024-11-12T20:48:20.061313427Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:48:20.067217 containerd[1462]: time="2024-11-12T20:48:20.067102950Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:20.074078 containerd[1462]: time="2024-11-12T20:48:20.073964224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:20.076272 containerd[1462]: time="2024-11-12T20:48:20.076047821Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.029697297s" Nov 12 20:48:20.076272 containerd[1462]: time="2024-11-12T20:48:20.076115763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:48:20.077151 containerd[1462]: time="2024-11-12T20:48:20.077112869Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 12 20:48:20.685172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038491153.mount: Deactivated successfully. 
Nov 12 20:48:20.713493 containerd[1462]: time="2024-11-12T20:48:20.713399340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:20.718967 containerd[1462]: time="2024-11-12T20:48:20.718874092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 12 20:48:20.724070 containerd[1462]: time="2024-11-12T20:48:20.723989607Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:20.732019 containerd[1462]: time="2024-11-12T20:48:20.731918896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:20.733600 containerd[1462]: time="2024-11-12T20:48:20.733441681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 655.927433ms" Nov 12 20:48:20.733600 containerd[1462]: time="2024-11-12T20:48:20.733483200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 12 20:48:20.734360 containerd[1462]: time="2024-11-12T20:48:20.734155671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Nov 12 20:48:21.382389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502050883.mount: Deactivated successfully. Nov 12 20:48:23.892892 containerd[1462]: time="2024-11-12T20:48:23.892669495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:23.903127 containerd[1462]: time="2024-11-12T20:48:23.902806042Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779650" Nov 12 20:48:23.911225 containerd[1462]: time="2024-11-12T20:48:23.911126370Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:23.920748 containerd[1462]: time="2024-11-12T20:48:23.920662371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:23.923821 containerd[1462]: time="2024-11-12T20:48:23.923765552Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.189576524s" Nov 12 20:48:23.924194 containerd[1462]: time="2024-11-12T20:48:23.924056582Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Nov 12 20:48:26.448498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:48:26.460361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:26.508109 systemd[1]: Reloading requested from client PID 2058 ('systemctl') (unit session-9.scope)... Nov 12 20:48:26.508128 systemd[1]: Reloading... Nov 12 20:48:26.654947 zram_generator::config[2097]: No configuration found. Nov 12 20:48:26.871361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:48:26.968982 systemd[1]: Reloading finished in 460 ms. Nov 12 20:48:27.027089 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:48:27.027208 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:48:27.027587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:27.040444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:27.185145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:27.185537 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:48:27.252771 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:48:27.252771 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:48:27.252771 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:48:27.254679 kubelet[2150]: I1112 20:48:27.254408 2150 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:48:27.596162 kubelet[2150]: I1112 20:48:27.596105 2150 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:48:27.596162 kubelet[2150]: I1112 20:48:27.596143 2150 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:48:27.596470 kubelet[2150]: I1112 20:48:27.596444 2150 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:48:27.629062 kubelet[2150]: I1112 20:48:27.628761 2150 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:48:27.629508 kubelet[2150]: E1112 20:48:27.629468 2150 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.182.197.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:27.642831 kubelet[2150]: E1112 20:48:27.642761 2150 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:48:27.642831 kubelet[2150]: I1112 20:48:27.642809 2150 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:48:27.648447 kubelet[2150]: I1112 20:48:27.648405 2150 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:48:27.651943 kubelet[2150]: I1112 20:48:27.651874 2150 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:48:27.652142 kubelet[2150]: I1112 20:48:27.652102 2150 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:48:27.652355 kubelet[2150]: I1112 20:48:27.652143 2150 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.0-2-eeaeb2d4c6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:48:27.652355 kubelet[2150]: I1112 20:48:27.652351 2150 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:48:27.652546 kubelet[2150]: I1112 20:48:27.652362 2150 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:48:27.652546 kubelet[2150]: I1112 20:48:27.652490 2150 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:27.655157 kubelet[2150]: I1112 20:48:27.655121 2150 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:48:27.655157 kubelet[2150]: I1112 20:48:27.655152 2150 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:48:27.656025 kubelet[2150]: I1112 20:48:27.655187 2150 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:48:27.656025 kubelet[2150]: I1112 20:48:27.655204 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:48:27.665651 kubelet[2150]: W1112 20:48:27.665407 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.197.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-2-eeaeb2d4c6&limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:27.665651 kubelet[2150]: E1112 20:48:27.665483 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://147.182.197.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-2-eeaeb2d4c6&limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:27.667385 kubelet[2150]: W1112 20:48:27.667120 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.197.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:27.667385 kubelet[2150]: E1112 20:48:27.667187 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.197.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:27.667385 kubelet[2150]: I1112 20:48:27.667289 2150 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:48:27.669877 kubelet[2150]: I1112 20:48:27.669802 2150 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:48:27.671216 kubelet[2150]: W1112 20:48:27.671176 2150 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:48:27.671869 kubelet[2150]: I1112 20:48:27.671839 2150 server.go:1269] "Started kubelet" Nov 12 20:48:27.673673 kubelet[2150]: I1112 20:48:27.673066 2150 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:48:27.674300 kubelet[2150]: I1112 20:48:27.674264 2150 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:48:27.677317 kubelet[2150]: I1112 20:48:27.677130 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:48:27.679768 kubelet[2150]: I1112 20:48:27.679012 2150 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:48:27.679768 kubelet[2150]: I1112 20:48:27.679294 2150 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:48:27.684553 kubelet[2150]: E1112 20:48:27.679603 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.197.11:6443/api/v1/namespaces/default/events\": dial tcp 147.182.197.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-2-eeaeb2d4c6.1807539b2c14505a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-2-eeaeb2d4c6,UID:ci-4081.2.0-2-eeaeb2d4c6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-2-eeaeb2d4c6,},FirstTimestamp:2024-11-12 20:48:27.671810138 +0000 UTC m=+0.477289196,LastTimestamp:2024-11-12 20:48:27.671810138 +0000 UTC m=+0.477289196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-2-eeaeb2d4c6,}" Nov 12 20:48:27.685300 kubelet[2150]: I1112 20:48:27.685281 2150 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:48:27.686422 kubelet[2150]: I1112 
20:48:27.686399 2150 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:48:27.686677 kubelet[2150]: E1112 20:48:27.686655 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.2.0-2-eeaeb2d4c6\" not found" Nov 12 20:48:27.688097 kubelet[2150]: E1112 20:48:27.688057 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.197.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-2-eeaeb2d4c6?timeout=10s\": dial tcp 147.182.197.11:6443: connect: connection refused" interval="200ms" Nov 12 20:48:27.688576 kubelet[2150]: I1112 20:48:27.688563 2150 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:48:27.689183 kubelet[2150]: W1112 20:48:27.689146 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.197.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:27.689374 kubelet[2150]: E1112 20:48:27.689332 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.197.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:27.691440 kubelet[2150]: I1112 20:48:27.691407 2150 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:48:27.691861 kubelet[2150]: I1112 20:48:27.691814 2150 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:48:27.692881 kubelet[2150]: I1112 20:48:27.691995 2150 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:48:27.692881 kubelet[2150]: I1112 20:48:27.692152 2150 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:48:27.698884 kubelet[2150]: E1112 20:48:27.698833 2150 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:48:27.709765 kubelet[2150]: I1112 20:48:27.709702 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:48:27.711203 kubelet[2150]: I1112 20:48:27.711173 2150 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:48:27.711302 kubelet[2150]: I1112 20:48:27.711214 2150 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:48:27.711302 kubelet[2150]: I1112 20:48:27.711257 2150 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:48:27.711355 kubelet[2150]: E1112 20:48:27.711312 2150 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:48:27.720379 kubelet[2150]: W1112 20:48:27.720323 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.197.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:27.720576 kubelet[2150]: E1112 20:48:27.720552 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.197.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:27.723506 kubelet[2150]: I1112 20:48:27.723482 2150 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:48:27.723655 kubelet[2150]: I1112 20:48:27.723643 2150 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:48:27.723713 kubelet[2150]: I1112 20:48:27.723706 2150 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:27.730677 kubelet[2150]: I1112 20:48:27.730643 2150 policy_none.go:49] "None policy: Start" Nov 12 20:48:27.731684 kubelet[2150]: I1112 20:48:27.731667 2150 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:48:27.731816 kubelet[2150]: I1112 20:48:27.731806 2150 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:48:27.743071 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:48:27.757171 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:48:27.762487 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 20:48:27.773366 kubelet[2150]: I1112 20:48:27.773327 2150 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:48:27.773366 kubelet[2150]: I1112 20:48:27.773609 2150 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:48:27.773366 kubelet[2150]: I1112 20:48:27.773624 2150 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:48:27.774250 kubelet[2150]: I1112 20:48:27.774023 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:48:27.777078 kubelet[2150]: E1112 20:48:27.777052 2150 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-2-eeaeb2d4c6\" not found" Nov 12 20:48:27.825141 systemd[1]: Created slice kubepods-burstable-pod6538f2afc25041c814c51bb3200f857a.slice - libcontainer container kubepods-burstable-pod6538f2afc25041c814c51bb3200f857a.slice. Nov 12 20:48:27.849188 systemd[1]: Created slice kubepods-burstable-podc18dfe0fdf721989cdaf8cf0e1a5d18c.slice - libcontainer container kubepods-burstable-podc18dfe0fdf721989cdaf8cf0e1a5d18c.slice. 
Nov 12 20:48:27.871383 systemd[1]: Created slice kubepods-burstable-podd17e97cef5cbe4a6db616720f746a00b.slice - libcontainer container kubepods-burstable-podd17e97cef5cbe4a6db616720f746a00b.slice. Nov 12 20:48:27.875075 kubelet[2150]: I1112 20:48:27.875030 2150 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.875518 kubelet[2150]: E1112 20:48:27.875396 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.197.11:6443/api/v1/nodes\": dial tcp 147.182.197.11:6443: connect: connection refused" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.889256 kubelet[2150]: E1112 20:48:27.889196 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.197.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-2-eeaeb2d4c6?timeout=10s\": dial tcp 147.182.197.11:6443: connect: connection refused" interval="400ms" Nov 12 20:48:27.894032 kubelet[2150]: I1112 20:48:27.893571 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6538f2afc25041c814c51bb3200f857a-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"6538f2afc25041c814c51bb3200f857a\") " pod="kube-system/kube-scheduler-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.894032 kubelet[2150]: I1112 20:48:27.893640 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c18dfe0fdf721989cdaf8cf0e1a5d18c-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"c18dfe0fdf721989cdaf8cf0e1a5d18c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.894032 kubelet[2150]: I1112 20:48:27.893670 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c18dfe0fdf721989cdaf8cf0e1a5d18c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"c18dfe0fdf721989cdaf8cf0e1a5d18c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.894032 kubelet[2150]: I1112 20:48:27.893707 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.894032 kubelet[2150]: I1112 20:48:27.893740 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c18dfe0fdf721989cdaf8cf0e1a5d18c-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"c18dfe0fdf721989cdaf8cf0e1a5d18c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.894403 kubelet[2150]: I1112 20:48:27.893764 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 
20:48:27.894403 kubelet[2150]: I1112 20:48:27.893788 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.894403 kubelet[2150]: I1112 20:48:27.893811 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:27.894403 kubelet[2150]: I1112 20:48:27.893858 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:28.077659 kubelet[2150]: I1112 20:48:28.077612 2150 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:28.078106 kubelet[2150]: E1112 20:48:28.078070 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.197.11:6443/api/v1/nodes\": dial tcp 147.182.197.11:6443: connect: connection refused" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:28.146559 kubelet[2150]: E1112 20:48:28.146365 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:28.147777 containerd[1462]: time="2024-11-12T20:48:28.147705109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-2-eeaeb2d4c6,Uid:6538f2afc25041c814c51bb3200f857a,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:28.154187 kubelet[2150]: E1112 20:48:28.154110 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:28.160001 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
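The repeated "Nameserver limits exceeded" errors come from kubelet capping the resolver list it hands to pods at three nameservers; the message shows the line it kept after truncation (note the duplicate 67.207.67.3). A minimal sketch of that truncation, where the limit of 3 matches the three addresses in the applied line and the extra fourth address is purely hypothetical:

```python
MAX_NAMESERVERS = 3  # per-pod resolv.conf limit, consistent with the applied line above

def apply_nameserver_limit(nameservers: list[str]) -> tuple[list[str], list[str]]:
    """Keep the first MAX_NAMESERVERS entries and report the rest as omitted."""
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

# Only the applied line "67.207.67.3 67.207.67.2 67.207.67.3" is known from the
# log; the fourth entry below is a stand-in for whatever was dropped.
kept, omitted = apply_nameserver_limit(
    ["67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.4"]
)
print("applied:", " ".join(kept), "| omitted:", omitted)
```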
Nov 12 20:48:28.161056 containerd[1462]: time="2024-11-12T20:48:28.160996921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6,Uid:c18dfe0fdf721989cdaf8cf0e1a5d18c,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:28.175126 kubelet[2150]: E1112 20:48:28.175062 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:28.175970 containerd[1462]: time="2024-11-12T20:48:28.175818485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6,Uid:d17e97cef5cbe4a6db616720f746a00b,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:28.289988 kubelet[2150]: E1112 20:48:28.289924 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.197.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-2-eeaeb2d4c6?timeout=10s\": dial tcp 147.182.197.11:6443: connect: connection refused" interval="800ms" Nov 12 20:48:28.480065 kubelet[2150]: I1112 20:48:28.479946 2150 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:28.480376 kubelet[2150]: E1112 20:48:28.480341 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.197.11:6443/api/v1/nodes\": dial tcp 147.182.197.11:6443: connect: connection refused" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:28.524681 kubelet[2150]: W1112 20:48:28.524551 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.197.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:28.524681 kubelet[2150]: E1112 20:48:28.524629 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.197.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:28.682552 kubelet[2150]: W1112 20:48:28.682449 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.197.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-2-eeaeb2d4c6&limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:28.682701 kubelet[2150]: E1112 20:48:28.682572 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.197.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-2-eeaeb2d4c6&limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:28.767814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70828927.mount: Deactivated successfully. 
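All of the watch, lease, and node-registration failures above point at the same endpoint, https://147.182.197.11:6443, which keeps returning "connection refused" until the kube-apiserver static pod further down comes up. A minimal reachability check against that endpoint, assuming network access to the node; this is just a TCP connect, with no TLS or authentication involved:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    # A refused connection here corresponds to the "connect: connection refused"
    # errors kubelet logs above.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_reachable("147.182.197.11", 6443))
```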
Nov 12 20:48:28.797903 containerd[1462]: time="2024-11-12T20:48:28.797061622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:28.800489 containerd[1462]: time="2024-11-12T20:48:28.800327194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:48:28.806913 containerd[1462]: time="2024-11-12T20:48:28.806406088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:28.813366 containerd[1462]: time="2024-11-12T20:48:28.813079073Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:28.816164 containerd[1462]: time="2024-11-12T20:48:28.816076965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:48:28.820812 containerd[1462]: time="2024-11-12T20:48:28.820018814Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:28.822937 containerd[1462]: time="2024-11-12T20:48:28.822831576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:48:28.829355 containerd[1462]: time="2024-11-12T20:48:28.829278093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:48:28.830281 containerd[1462]: time="2024-11-12T20:48:28.830030742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 668.928571ms" Nov 12 20:48:28.832316 containerd[1462]: time="2024-11-12T20:48:28.832267909Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 684.462853ms" Nov 12 20:48:28.842839 containerd[1462]: time="2024-11-12T20:48:28.842764727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 666.807913ms" Nov 12 20:48:29.019876 kubelet[2150]: W1112 20:48:29.019336 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.197.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:29.019876 
kubelet[2150]: E1112 20:48:29.019422 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.197.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:29.091377 kubelet[2150]: E1112 20:48:29.091123 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.197.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-2-eeaeb2d4c6?timeout=10s\": dial tcp 147.182.197.11:6443: connect: connection refused" interval="1.6s" Nov 12 20:48:29.100879 containerd[1462]: time="2024-11-12T20:48:29.100451626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:29.102090 containerd[1462]: time="2024-11-12T20:48:29.101384402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:29.102090 containerd[1462]: time="2024-11-12T20:48:29.101431986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:29.108137 containerd[1462]: time="2024-11-12T20:48:29.106826531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:29.108137 containerd[1462]: time="2024-11-12T20:48:29.104659784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:29.108137 containerd[1462]: time="2024-11-12T20:48:29.104739808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:29.108137 containerd[1462]: time="2024-11-12T20:48:29.104756213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:29.108137 containerd[1462]: time="2024-11-12T20:48:29.104986783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:29.115086 containerd[1462]: time="2024-11-12T20:48:29.114334651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:29.115086 containerd[1462]: time="2024-11-12T20:48:29.114421216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:29.115086 containerd[1462]: time="2024-11-12T20:48:29.114449093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:29.115086 containerd[1462]: time="2024-11-12T20:48:29.114562568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:29.143496 systemd[1]: Started cri-containerd-2376008e10cc064b20767eef26b97f1288fa662e12cff46d2d908680af0e84fe.scope - libcontainer container 2376008e10cc064b20767eef26b97f1288fa662e12cff46d2d908680af0e84fe. 
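The "Failed to ensure lease exists, will retry" interval doubles on every failure in this log: 200ms, 400ms, 800ms, and now 1.6s. A minimal sketch of that doubling backoff; the base interval and the doubling are read off the log, and any cap beyond 1.6s is not observable here:

```python
def lease_retry_intervals(base: float = 0.2, failures: int = 4):
    """Yield the doubling retry intervals seen in the log: 0.2s, 0.4s, 0.8s, 1.6s."""
    interval = base
    for _ in range(failures):
        yield interval
        interval *= 2

print([f"{t:g}s" for t in lease_retry_intervals()])  # ['0.2s', '0.4s', '0.8s', '1.6s']
```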
Nov 12 20:48:29.157155 systemd[1]: Started cri-containerd-17401c36d7902711490ce780ca5d53f36b5dedbb3bfbbbaad15ceb9f7f3b6c37.scope - libcontainer container 17401c36d7902711490ce780ca5d53f36b5dedbb3bfbbbaad15ceb9f7f3b6c37. Nov 12 20:48:29.160243 kubelet[2150]: W1112 20:48:29.160194 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.197.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.197.11:6443: connect: connection refused Nov 12 20:48:29.160373 kubelet[2150]: E1112 20:48:29.160265 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.197.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.197.11:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:48:29.160599 systemd[1]: Started cri-containerd-bcb006fdbadbda4d853fe59ed152032b9c09e5670a4c39222218c01fc6c95c89.scope - libcontainer container bcb006fdbadbda4d853fe59ed152032b9c09e5670a4c39222218c01fc6c95c89. Nov 12 20:48:29.223124 containerd[1462]: time="2024-11-12T20:48:29.222833967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6,Uid:d17e97cef5cbe4a6db616720f746a00b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2376008e10cc064b20767eef26b97f1288fa662e12cff46d2d908680af0e84fe\"" Nov 12 20:48:29.231871 kubelet[2150]: E1112 20:48:29.231593 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:29.236178 containerd[1462]: time="2024-11-12T20:48:29.236080541Z" level=info msg="CreateContainer within sandbox \"2376008e10cc064b20767eef26b97f1288fa662e12cff46d2d908680af0e84fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:48:29.253894 containerd[1462]: time="2024-11-12T20:48:29.253679665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6,Uid:c18dfe0fdf721989cdaf8cf0e1a5d18c,Namespace:kube-system,Attempt:0,} returns sandbox id \"17401c36d7902711490ce780ca5d53f36b5dedbb3bfbbbaad15ceb9f7f3b6c37\"" Nov 12 20:48:29.256268 kubelet[2150]: E1112 20:48:29.256053 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:29.261743 containerd[1462]: time="2024-11-12T20:48:29.261610658Z" level=info msg="CreateContainer within sandbox \"17401c36d7902711490ce780ca5d53f36b5dedbb3bfbbbaad15ceb9f7f3b6c37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:48:29.262810 containerd[1462]: time="2024-11-12T20:48:29.262779096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-2-eeaeb2d4c6,Uid:6538f2afc25041c814c51bb3200f857a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcb006fdbadbda4d853fe59ed152032b9c09e5670a4c39222218c01fc6c95c89\"" Nov 12 20:48:29.264593 kubelet[2150]: E1112 20:48:29.264564 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:29.266773 containerd[1462]: time="2024-11-12T20:48:29.266732330Z" level=info 
msg="CreateContainer within sandbox \"bcb006fdbadbda4d853fe59ed152032b9c09e5670a4c39222218c01fc6c95c89\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:48:29.283723 kubelet[2150]: I1112 20:48:29.283115 2150 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:29.284744 kubelet[2150]: E1112 20:48:29.284702 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.182.197.11:6443/api/v1/nodes\": dial tcp 147.182.197.11:6443: connect: connection refused" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:29.305206 containerd[1462]: time="2024-11-12T20:48:29.305152780Z" level=info msg="CreateContainer within sandbox \"2376008e10cc064b20767eef26b97f1288fa662e12cff46d2d908680af0e84fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69f6913744023ca5e16e0d25d5b46db552915bcea6a35ceb952439bf11ebfadd\"" Nov 12 20:48:29.306037 containerd[1462]: time="2024-11-12T20:48:29.305987032Z" level=info msg="StartContainer for \"69f6913744023ca5e16e0d25d5b46db552915bcea6a35ceb952439bf11ebfadd\"" Nov 12 20:48:29.341283 systemd[1]: Started cri-containerd-69f6913744023ca5e16e0d25d5b46db552915bcea6a35ceb952439bf11ebfadd.scope - libcontainer container 69f6913744023ca5e16e0d25d5b46db552915bcea6a35ceb952439bf11ebfadd. Nov 12 20:48:29.350948 containerd[1462]: time="2024-11-12T20:48:29.350815374Z" level=info msg="CreateContainer within sandbox \"bcb006fdbadbda4d853fe59ed152032b9c09e5670a4c39222218c01fc6c95c89\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"00e5386e4a1198e485b2d933a0b2b5893a55b3ddf6801acf4503e8e3e0092211\"" Nov 12 20:48:29.351602 containerd[1462]: time="2024-11-12T20:48:29.351565223Z" level=info msg="StartContainer for \"00e5386e4a1198e485b2d933a0b2b5893a55b3ddf6801acf4503e8e3e0092211\"" Nov 12 20:48:29.358027 containerd[1462]: time="2024-11-12T20:48:29.357906759Z" level=info msg="CreateContainer within sandbox \"17401c36d7902711490ce780ca5d53f36b5dedbb3bfbbbaad15ceb9f7f3b6c37\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b2bb883713b8b8115847024afb57a8dc6ea96279945c360dddf23a1eb00cf42\"" Nov 12 20:48:29.359197 containerd[1462]: time="2024-11-12T20:48:29.358763529Z" level=info msg="StartContainer for \"8b2bb883713b8b8115847024afb57a8dc6ea96279945c360dddf23a1eb00cf42\"" Nov 12 20:48:29.406559 systemd[1]: Started cri-containerd-00e5386e4a1198e485b2d933a0b2b5893a55b3ddf6801acf4503e8e3e0092211.scope - libcontainer container 00e5386e4a1198e485b2d933a0b2b5893a55b3ddf6801acf4503e8e3e0092211. Nov 12 20:48:29.432130 systemd[1]: Started cri-containerd-8b2bb883713b8b8115847024afb57a8dc6ea96279945c360dddf23a1eb00cf42.scope - libcontainer container 8b2bb883713b8b8115847024afb57a8dc6ea96279945c360dddf23a1eb00cf42. 
Nov 12 20:48:29.437164 containerd[1462]: time="2024-11-12T20:48:29.435596156Z" level=info msg="StartContainer for \"69f6913744023ca5e16e0d25d5b46db552915bcea6a35ceb952439bf11ebfadd\" returns successfully" Nov 12 20:48:29.509336 containerd[1462]: time="2024-11-12T20:48:29.508461188Z" level=info msg="StartContainer for \"8b2bb883713b8b8115847024afb57a8dc6ea96279945c360dddf23a1eb00cf42\" returns successfully" Nov 12 20:48:29.536274 containerd[1462]: time="2024-11-12T20:48:29.536153188Z" level=info msg="StartContainer for \"00e5386e4a1198e485b2d933a0b2b5893a55b3ddf6801acf4503e8e3e0092211\" returns successfully" Nov 12 20:48:29.732251 kubelet[2150]: E1112 20:48:29.732215 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:29.735746 kubelet[2150]: E1112 20:48:29.735562 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:29.737482 kubelet[2150]: E1112 20:48:29.737412 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:30.741506 kubelet[2150]: E1112 20:48:30.741418 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:30.886489 kubelet[2150]: I1112 20:48:30.886068 2150 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:30.897921 kubelet[2150]: E1112 20:48:30.897728 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:31.516838 kubelet[2150]: E1112 20:48:31.516778 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.0-2-eeaeb2d4c6\" not found" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:31.647628 kubelet[2150]: I1112 20:48:31.647381 2150 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:31.647628 kubelet[2150]: E1112 20:48:31.647438 2150 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.2.0-2-eeaeb2d4c6\": node \"ci-4081.2.0-2-eeaeb2d4c6\" not found" Nov 12 20:48:31.668046 kubelet[2150]: I1112 20:48:31.668006 2150 apiserver.go:52] "Watching apiserver" Nov 12 20:48:31.689882 kubelet[2150]: I1112 20:48:31.689742 2150 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:48:34.050320 kubelet[2150]: W1112 20:48:34.050271 2150 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:34.051451 kubelet[2150]: E1112 20:48:34.050741 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:34.201747 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-9.scope)... Nov 12 20:48:34.201916 systemd[1]: Reloading... 
Nov 12 20:48:34.367998 zram_generator::config[2462]: No configuration found. Nov 12 20:48:34.562671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:48:34.711022 systemd[1]: Reloading finished in 507 ms. Nov 12 20:48:34.752727 kubelet[2150]: E1112 20:48:34.751138 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:34.768979 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:34.782903 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:48:34.783363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:34.791462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:48:34.969220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:48:34.973346 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:48:35.064416 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:48:35.064416 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:48:35.064416 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:48:35.066421 kubelet[2510]: I1112 20:48:35.066176 2510 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:48:35.079284 kubelet[2510]: I1112 20:48:35.079228 2510 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:48:35.079284 kubelet[2510]: I1112 20:48:35.079275 2510 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:48:35.080697 kubelet[2510]: I1112 20:48:35.080406 2510 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:48:35.082723 kubelet[2510]: I1112 20:48:35.082509 2510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:48:35.085216 kubelet[2510]: I1112 20:48:35.085165 2510 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:48:35.089224 kubelet[2510]: E1112 20:48:35.089187 2510 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:48:35.089224 kubelet[2510]: I1112 20:48:35.089219 2510 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 12 20:48:35.095827 kubelet[2510]: I1112 20:48:35.095788 2510 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:48:35.096031 kubelet[2510]: I1112 20:48:35.095956 2510 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:48:35.096114 kubelet[2510]: I1112 20:48:35.096072 2510 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:48:35.096371 kubelet[2510]: I1112 20:48:35.096118 2510 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.0-2-eeaeb2d4c6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:48:35.096504 kubelet[2510]: I1112 20:48:35.096381 2510 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:48:35.096504 kubelet[2510]: I1112 20:48:35.096398 2510 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:48:35.096504 kubelet[2510]: I1112 20:48:35.096445 2510 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:35.096672 kubelet[2510]: I1112 20:48:35.096597 2510 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:48:35.096672 kubelet[2510]: I1112 20:48:35.096616 2510 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:48:35.096672 kubelet[2510]: I1112 20:48:35.096652 2510 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:48:35.096672 kubelet[2510]: I1112 20:48:35.096670 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:48:35.106178 kubelet[2510]: I1112 20:48:35.106144 2510 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:48:35.106794 kubelet[2510]: I1112 20:48:35.106629 2510 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:48:35.108953 kubelet[2510]: I1112 20:48:35.107984 2510 server.go:1269] "Started kubelet" Nov 12 
20:48:35.113666 kubelet[2510]: I1112 20:48:35.112691 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:48:35.133035 kubelet[2510]: I1112 20:48:35.132815 2510 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:48:35.135351 kubelet[2510]: I1112 20:48:35.135279 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:48:35.135691 kubelet[2510]: I1112 20:48:35.135661 2510 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:48:35.137437 kubelet[2510]: I1112 20:48:35.137316 2510 server.go:460] "Adding debug handlers to kubelet server" Nov 12 20:48:35.139346 kubelet[2510]: I1112 20:48:35.139324 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 12 20:48:35.141825 kubelet[2510]: I1112 20:48:35.141805 2510 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 12 20:48:35.142123 kubelet[2510]: I1112 20:48:35.142110 2510 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 12 20:48:35.143098 kubelet[2510]: I1112 20:48:35.142330 2510 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:48:35.143422 kubelet[2510]: I1112 20:48:35.143407 2510 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:48:35.143789 kubelet[2510]: I1112 20:48:35.143724 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:48:35.148898 kubelet[2510]: E1112 20:48:35.148037 2510 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:48:35.148898 kubelet[2510]: I1112 20:48:35.148341 2510 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:48:35.155823 kubelet[2510]: I1112 20:48:35.155781 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:48:35.162457 kubelet[2510]: I1112 20:48:35.162424 2510 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:48:35.162635 kubelet[2510]: I1112 20:48:35.162625 2510 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:48:35.162750 kubelet[2510]: I1112 20:48:35.162738 2510 kubelet.go:2321] "Starting kubelet main sync loop" Nov 12 20:48:35.162933 kubelet[2510]: E1112 20:48:35.162912 2510 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:48:35.224030 kubelet[2510]: I1112 20:48:35.223912 2510 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:48:35.224629 kubelet[2510]: I1112 20:48:35.224216 2510 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:48:35.224629 kubelet[2510]: I1112 20:48:35.224271 2510 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:48:35.224629 kubelet[2510]: I1112 20:48:35.224485 2510 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:48:35.224629 kubelet[2510]: I1112 20:48:35.224504 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:48:35.224629 kubelet[2510]: I1112 20:48:35.224533 2510 policy_none.go:49] "None policy: Start" Nov 12 20:48:35.230148 kubelet[2510]: I1112 20:48:35.230118 2510 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:48:35.230148 kubelet[2510]: I1112 20:48:35.230156 2510 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:48:35.230392 kubelet[2510]: I1112 20:48:35.230374 2510 state_mem.go:75] "Updated machine memory state" Nov 12 20:48:35.242529 kubelet[2510]: I1112 20:48:35.242403 2510 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:48:35.244493 kubelet[2510]: I1112 20:48:35.244307 2510 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 12 20:48:35.244493 kubelet[2510]: I1112 20:48:35.244331 2510 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:48:35.245460 kubelet[2510]: I1112 20:48:35.244741 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:48:35.278964 kubelet[2510]: W1112 20:48:35.277741 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:35.282887 kubelet[2510]: W1112 20:48:35.282730 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:35.282887 kubelet[2510]: E1112 20:48:35.282869 2510 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.288619 kubelet[2510]: W1112 20:48:35.288369 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:35.344763 kubelet[2510]: I1112 20:48:35.344360 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c18dfe0fdf721989cdaf8cf0e1a5d18c-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"c18dfe0fdf721989cdaf8cf0e1a5d18c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.344763 kubelet[2510]: 
I1112 20:48:35.344444 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c18dfe0fdf721989cdaf8cf0e1a5d18c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"c18dfe0fdf721989cdaf8cf0e1a5d18c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.344763 kubelet[2510]: I1112 20:48:35.344482 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.344763 kubelet[2510]: I1112 20:48:35.344509 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.344763 kubelet[2510]: I1112 20:48:35.344531 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.345246 kubelet[2510]: I1112 20:48:35.344559 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.345246 kubelet[2510]: I1112 20:48:35.344584 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17e97cef5cbe4a6db616720f746a00b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"d17e97cef5cbe4a6db616720f746a00b\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.345246 kubelet[2510]: I1112 20:48:35.344610 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6538f2afc25041c814c51bb3200f857a-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"6538f2afc25041c814c51bb3200f857a\") " pod="kube-system/kube-scheduler-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.345246 kubelet[2510]: I1112 20:48:35.344633 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c18dfe0fdf721989cdaf8cf0e1a5d18c-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" (UID: \"c18dfe0fdf721989cdaf8cf0e1a5d18c\") " pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.365179 kubelet[2510]: I1112 20:48:35.364758 2510 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.378875 kubelet[2510]: I1112 20:48:35.378718 2510 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.378875 kubelet[2510]: I1112 20:48:35.378816 2510 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:35.578781 kubelet[2510]: E1112 20:48:35.578723 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:35.584593 kubelet[2510]: E1112 20:48:35.584372 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:35.590264 kubelet[2510]: E1112 20:48:35.589815 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:36.098522 kubelet[2510]: I1112 20:48:36.097499 2510 apiserver.go:52] "Watching apiserver" Nov 12 20:48:36.142586 kubelet[2510]: I1112 20:48:36.142531 2510 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 12 20:48:36.198579 kubelet[2510]: E1112 20:48:36.197790 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:36.203498 kubelet[2510]: E1112 20:48:36.203224 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:36.218748 kubelet[2510]: W1112 20:48:36.217559 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:48:36.218748 kubelet[2510]: E1112 20:48:36.217639 2510 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:48:36.218748 kubelet[2510]: E1112 20:48:36.217814 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:36.260798 kubelet[2510]: I1112 20:48:36.260724 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-2-eeaeb2d4c6" podStartSLOduration=1.260702044 podStartE2EDuration="1.260702044s" podCreationTimestamp="2024-11-12 20:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:36.243829615 +0000 UTC m=+1.248806985" watchObservedRunningTime="2024-11-12 20:48:36.260702044 +0000 UTC m=+1.265679434" Nov 12 20:48:36.276543 kubelet[2510]: I1112 20:48:36.275735 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-2-eeaeb2d4c6" podStartSLOduration=2.2757119550000002 podStartE2EDuration="2.275711955s" podCreationTimestamp="2024-11-12 20:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:36.261753984 +0000 UTC m=+1.266731354" watchObservedRunningTime="2024-11-12 20:48:36.275711955 +0000 UTC m=+1.280689317" Nov 12 20:48:36.304224 kubelet[2510]: I1112 20:48:36.304149 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.0-2-eeaeb2d4c6" podStartSLOduration=1.304122718 podStartE2EDuration="1.304122718s" podCreationTimestamp="2024-11-12 20:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:36.276636297 +0000 UTC m=+1.281613668" watchObservedRunningTime="2024-11-12 20:48:36.304122718 +0000 UTC m=+1.309100090" Nov 12 20:48:37.201876 kubelet[2510]: E1112 20:48:37.201819 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:37.203439 kubelet[2510]: E1112 20:48:37.203245 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:38.923150 kubelet[2510]: I1112 20:48:38.923099 2510 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:48:38.923559 containerd[1462]: time="2024-11-12T20:48:38.923485682Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:48:38.925928 kubelet[2510]: I1112 20:48:38.924117 2510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:48:39.496127 sudo[1666]: pam_unix(sudo:session): session closed for user root Nov 12 20:48:39.501941 sshd[1663]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:39.509256 systemd[1]: sshd@8-147.182.197.11:22-139.178.68.195:34684.service: Deactivated successfully. Nov 12 20:48:39.512789 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:48:39.513091 systemd[1]: session-9.scope: Consumed 5.210s CPU time, 152.2M memory peak, 0B memory swap peak. Nov 12 20:48:39.514711 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:48:39.517846 systemd-logind[1448]: Removed session 9. Nov 12 20:48:39.589969 update_engine[1450]: I20241112 20:48:39.589833 1450 update_attempter.cc:509] Updating boot flags... 
Nov 12 20:48:39.638395 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2597) Nov 12 20:48:39.674872 kubelet[2510]: I1112 20:48:39.674595 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6af4f646-952f-4753-bef6-5be32b59d47e-xtables-lock\") pod \"kube-proxy-pdqrm\" (UID: \"6af4f646-952f-4753-bef6-5be32b59d47e\") " pod="kube-system/kube-proxy-pdqrm" Nov 12 20:48:39.674872 kubelet[2510]: I1112 20:48:39.674636 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6af4f646-952f-4753-bef6-5be32b59d47e-lib-modules\") pod \"kube-proxy-pdqrm\" (UID: \"6af4f646-952f-4753-bef6-5be32b59d47e\") " pod="kube-system/kube-proxy-pdqrm" Nov 12 20:48:39.674872 kubelet[2510]: I1112 20:48:39.674697 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6af4f646-952f-4753-bef6-5be32b59d47e-kube-proxy\") pod \"kube-proxy-pdqrm\" (UID: \"6af4f646-952f-4753-bef6-5be32b59d47e\") " pod="kube-system/kube-proxy-pdqrm" Nov 12 20:48:39.674872 kubelet[2510]: I1112 20:48:39.674725 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4bv\" (UniqueName: \"kubernetes.io/projected/6af4f646-952f-4753-bef6-5be32b59d47e-kube-api-access-2g4bv\") pod \"kube-proxy-pdqrm\" (UID: \"6af4f646-952f-4753-bef6-5be32b59d47e\") " pod="kube-system/kube-proxy-pdqrm" Nov 12 20:48:39.688822 systemd[1]: Created slice kubepods-besteffort-pod6af4f646_952f_4753_bef6_5be32b59d47e.slice - libcontainer container kubepods-besteffort-pod6af4f646_952f_4753_bef6_5be32b59d47e.slice. Nov 12 20:48:39.938522 systemd[1]: Created slice kubepods-besteffort-podf281f2de_a36a_4e2d_adea_1f09ed511652.slice - libcontainer container kubepods-besteffort-podf281f2de_a36a_4e2d_adea_1f09ed511652.slice. Nov 12 20:48:40.023252 kubelet[2510]: E1112 20:48:40.023025 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:40.024719 containerd[1462]: time="2024-11-12T20:48:40.024681397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdqrm,Uid:6af4f646-952f-4753-bef6-5be32b59d47e,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:40.077405 kubelet[2510]: I1112 20:48:40.077348 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f281f2de-a36a-4e2d-adea-1f09ed511652-var-lib-calico\") pod \"tigera-operator-f8bc97d4c-6snfd\" (UID: \"f281f2de-a36a-4e2d-adea-1f09ed511652\") " pod="tigera-operator/tigera-operator-f8bc97d4c-6snfd" Nov 12 20:48:40.077405 kubelet[2510]: I1112 20:48:40.077398 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68nvw\" (UniqueName: \"kubernetes.io/projected/f281f2de-a36a-4e2d-adea-1f09ed511652-kube-api-access-68nvw\") pod \"tigera-operator-f8bc97d4c-6snfd\" (UID: \"f281f2de-a36a-4e2d-adea-1f09ed511652\") " pod="tigera-operator/tigera-operator-f8bc97d4c-6snfd" Nov 12 20:48:40.127798 containerd[1462]: time="2024-11-12T20:48:40.127611483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:40.127798 containerd[1462]: time="2024-11-12T20:48:40.127697275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:40.127798 containerd[1462]: time="2024-11-12T20:48:40.127723174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:40.128169 containerd[1462]: time="2024-11-12T20:48:40.127935878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:40.156183 systemd[1]: Started cri-containerd-94f656106b5c69549d7e1bd029cea3abfda7327ac2faa629691ca251e2b6c6fb.scope - libcontainer container 94f656106b5c69549d7e1bd029cea3abfda7327ac2faa629691ca251e2b6c6fb. Nov 12 20:48:40.203777 containerd[1462]: time="2024-11-12T20:48:40.201660660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdqrm,Uid:6af4f646-952f-4753-bef6-5be32b59d47e,Namespace:kube-system,Attempt:0,} returns sandbox id \"94f656106b5c69549d7e1bd029cea3abfda7327ac2faa629691ca251e2b6c6fb\"" Nov 12 20:48:40.207853 kubelet[2510]: E1112 20:48:40.207535 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:40.214911 containerd[1462]: time="2024-11-12T20:48:40.214860999Z" level=info msg="CreateContainer within sandbox \"94f656106b5c69549d7e1bd029cea3abfda7327ac2faa629691ca251e2b6c6fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:48:40.244888 containerd[1462]: time="2024-11-12T20:48:40.244699302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-6snfd,Uid:f281f2de-a36a-4e2d-adea-1f09ed511652,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:48:40.263668 containerd[1462]: time="2024-11-12T20:48:40.263593379Z" level=info msg="CreateContainer within sandbox \"94f656106b5c69549d7e1bd029cea3abfda7327ac2faa629691ca251e2b6c6fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac55ae87d5651db3f8039b7cd07936579f377cb7694521e947718ddabfe176e9\"" Nov 12 20:48:40.264786 containerd[1462]: time="2024-11-12T20:48:40.264588453Z" level=info msg="StartContainer for \"ac55ae87d5651db3f8039b7cd07936579f377cb7694521e947718ddabfe176e9\"" Nov 12 20:48:40.306186 systemd[1]: Started cri-containerd-ac55ae87d5651db3f8039b7cd07936579f377cb7694521e947718ddabfe176e9.scope - libcontainer container ac55ae87d5651db3f8039b7cd07936579f377cb7694521e947718ddabfe176e9. Nov 12 20:48:40.316589 containerd[1462]: time="2024-11-12T20:48:40.316402443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:40.316589 containerd[1462]: time="2024-11-12T20:48:40.316470679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:40.316589 containerd[1462]: time="2024-11-12T20:48:40.316486710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:40.316589 containerd[1462]: time="2024-11-12T20:48:40.316591215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:40.352245 systemd[1]: Started cri-containerd-684af863d2e0c6860bfb4653a9e79b37c0987d02fceef6e621b859fbd61d6014.scope - libcontainer container 684af863d2e0c6860bfb4653a9e79b37c0987d02fceef6e621b859fbd61d6014. Nov 12 20:48:40.393874 containerd[1462]: time="2024-11-12T20:48:40.393450452Z" level=info msg="StartContainer for \"ac55ae87d5651db3f8039b7cd07936579f377cb7694521e947718ddabfe176e9\" returns successfully" Nov 12 20:48:40.432913 containerd[1462]: time="2024-11-12T20:48:40.432840891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-6snfd,Uid:f281f2de-a36a-4e2d-adea-1f09ed511652,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"684af863d2e0c6860bfb4653a9e79b37c0987d02fceef6e621b859fbd61d6014\"" Nov 12 20:48:40.436156 containerd[1462]: time="2024-11-12T20:48:40.435965630Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:48:41.230998 kubelet[2510]: E1112 20:48:41.230537 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:41.255652 kubelet[2510]: I1112 20:48:41.255577 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pdqrm" podStartSLOduration=2.255508692 podStartE2EDuration="2.255508692s" podCreationTimestamp="2024-11-12 20:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:41.253186535 +0000 UTC m=+6.258163906" watchObservedRunningTime="2024-11-12 20:48:41.255508692 +0000 UTC m=+6.260486061" Nov 12 20:48:42.236236 kubelet[2510]: E1112 20:48:42.236187 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:43.652760 kubelet[2510]: E1112 20:48:43.652257 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:44.241476 kubelet[2510]: E1112 20:48:44.240610 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:44.469606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412880548.mount: Deactivated successfully. 
Nov 12 20:48:44.621026 kubelet[2510]: E1112 20:48:44.620986 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:45.242264 kubelet[2510]: E1112 20:48:45.241908 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:47.036675 kubelet[2510]: E1112 20:48:47.035839 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:47.288997 containerd[1462]: time="2024-11-12T20:48:47.288506657Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:47.291331 containerd[1462]: time="2024-11-12T20:48:47.291243161Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763347" Nov 12 20:48:47.295409 containerd[1462]: time="2024-11-12T20:48:47.295314576Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:47.304510 containerd[1462]: time="2024-11-12T20:48:47.304444953Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:47.307524 containerd[1462]: time="2024-11-12T20:48:47.307183501Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 6.871164431s" Nov 12 20:48:47.307524 containerd[1462]: time="2024-11-12T20:48:47.307240824Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:48:47.379796 containerd[1462]: time="2024-11-12T20:48:47.379750231Z" level=info msg="CreateContainer within sandbox \"684af863d2e0c6860bfb4653a9e79b37c0987d02fceef6e621b859fbd61d6014\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:48:47.413604 containerd[1462]: time="2024-11-12T20:48:47.413520480Z" level=info msg="CreateContainer within sandbox \"684af863d2e0c6860bfb4653a9e79b37c0987d02fceef6e621b859fbd61d6014\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"85c663a18565804ff39c6913e7c7a4ba97eaf63eca1bed5358c29cce6ad91180\"" Nov 12 20:48:47.415374 containerd[1462]: time="2024-11-12T20:48:47.415330041Z" level=info msg="StartContainer for \"85c663a18565804ff39c6913e7c7a4ba97eaf63eca1bed5358c29cce6ad91180\"" Nov 12 20:48:47.465378 systemd[1]: Started cri-containerd-85c663a18565804ff39c6913e7c7a4ba97eaf63eca1bed5358c29cce6ad91180.scope - libcontainer container 85c663a18565804ff39c6913e7c7a4ba97eaf63eca1bed5358c29cce6ad91180. 
Nov 12 20:48:47.507063 containerd[1462]: time="2024-11-12T20:48:47.506961433Z" level=info msg="StartContainer for \"85c663a18565804ff39c6913e7c7a4ba97eaf63eca1bed5358c29cce6ad91180\" returns successfully" Nov 12 20:48:50.734407 kubelet[2510]: I1112 20:48:50.734305 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-f8bc97d4c-6snfd" podStartSLOduration=4.853428765 podStartE2EDuration="11.732770374s" podCreationTimestamp="2024-11-12 20:48:39 +0000 UTC" firstStartedPulling="2024-11-12 20:48:40.435108686 +0000 UTC m=+5.440086032" lastFinishedPulling="2024-11-12 20:48:47.314450292 +0000 UTC m=+12.319427641" observedRunningTime="2024-11-12 20:48:48.272225446 +0000 UTC m=+13.277202815" watchObservedRunningTime="2024-11-12 20:48:50.732770374 +0000 UTC m=+15.737747744" Nov 12 20:48:50.747471 systemd[1]: Created slice kubepods-besteffort-pod675fde2e_078d_4015_9e28_be09433f6214.slice - libcontainer container kubepods-besteffort-pod675fde2e_078d_4015_9e28_be09433f6214.slice. Nov 12 20:48:50.848884 kubelet[2510]: I1112 20:48:50.848743 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/675fde2e-078d-4015-9e28-be09433f6214-tigera-ca-bundle\") pod \"calico-typha-5455776599-xjg5v\" (UID: \"675fde2e-078d-4015-9e28-be09433f6214\") " pod="calico-system/calico-typha-5455776599-xjg5v" Nov 12 20:48:50.848884 kubelet[2510]: I1112 20:48:50.848797 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/675fde2e-078d-4015-9e28-be09433f6214-typha-certs\") pod \"calico-typha-5455776599-xjg5v\" (UID: \"675fde2e-078d-4015-9e28-be09433f6214\") " pod="calico-system/calico-typha-5455776599-xjg5v" Nov 12 20:48:50.848884 kubelet[2510]: I1112 20:48:50.848868 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68lw\" (UniqueName: \"kubernetes.io/projected/675fde2e-078d-4015-9e28-be09433f6214-kube-api-access-r68lw\") pod \"calico-typha-5455776599-xjg5v\" (UID: \"675fde2e-078d-4015-9e28-be09433f6214\") " pod="calico-system/calico-typha-5455776599-xjg5v" Nov 12 20:48:50.923511 systemd[1]: Created slice kubepods-besteffort-pod5423797a_a9ae_41b8_b941_5f27fb427451.slice - libcontainer container kubepods-besteffort-pod5423797a_a9ae_41b8_b941_5f27fb427451.slice. 
Nov 12 20:48:51.050646 kubelet[2510]: I1112 20:48:51.050201 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-xtables-lock\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050646 kubelet[2510]: I1112 20:48:51.050261 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-var-lib-calico\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050646 kubelet[2510]: I1112 20:48:51.050292 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-lib-modules\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050646 kubelet[2510]: I1112 20:48:51.050316 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-var-run-calico\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050646 kubelet[2510]: I1112 20:48:51.050340 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-policysync\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050957 kubelet[2510]: I1112 20:48:51.050367 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgltv\" (UniqueName: \"kubernetes.io/projected/5423797a-a9ae-41b8-b941-5f27fb427451-kube-api-access-xgltv\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050957 kubelet[2510]: I1112 20:48:51.050396 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-cni-net-dir\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050957 kubelet[2510]: I1112 20:48:51.050426 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-flexvol-driver-host\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050957 kubelet[2510]: I1112 20:48:51.050452 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-cni-bin-dir\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.050957 kubelet[2510]: I1112 20:48:51.050476 2510 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5423797a-a9ae-41b8-b941-5f27fb427451-cni-log-dir\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.051159 kubelet[2510]: I1112 20:48:51.050505 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5423797a-a9ae-41b8-b941-5f27fb427451-tigera-ca-bundle\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.051159 kubelet[2510]: I1112 20:48:51.050528 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5423797a-a9ae-41b8-b941-5f27fb427451-node-certs\") pod \"calico-node-2p5h6\" (UID: \"5423797a-a9ae-41b8-b941-5f27fb427451\") " pod="calico-system/calico-node-2p5h6" Nov 12 20:48:51.055812 kubelet[2510]: E1112 20:48:51.055739 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srhkg" podUID="09520a02-648f-4672-85a7-9b0a62557d5f" Nov 12 20:48:51.058051 kubelet[2510]: E1112 20:48:51.057719 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:51.059587 containerd[1462]: time="2024-11-12T20:48:51.058431903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5455776599-xjg5v,Uid:675fde2e-078d-4015-9e28-be09433f6214,Namespace:calico-system,Attempt:0,}" Nov 12 20:48:51.131650 containerd[1462]: time="2024-11-12T20:48:51.131330189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:51.132158 containerd[1462]: time="2024-11-12T20:48:51.131495211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:51.132158 containerd[1462]: time="2024-11-12T20:48:51.131521164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:51.132158 containerd[1462]: time="2024-11-12T20:48:51.131675900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:51.151879 kubelet[2510]: I1112 20:48:51.151555 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09520a02-648f-4672-85a7-9b0a62557d5f-kubelet-dir\") pod \"csi-node-driver-srhkg\" (UID: \"09520a02-648f-4672-85a7-9b0a62557d5f\") " pod="calico-system/csi-node-driver-srhkg" Nov 12 20:48:51.151879 kubelet[2510]: I1112 20:48:51.151610 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/09520a02-648f-4672-85a7-9b0a62557d5f-varrun\") pod \"csi-node-driver-srhkg\" (UID: \"09520a02-648f-4672-85a7-9b0a62557d5f\") " pod="calico-system/csi-node-driver-srhkg" Nov 12 20:48:51.151879 kubelet[2510]: I1112 20:48:51.151629 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09520a02-648f-4672-85a7-9b0a62557d5f-socket-dir\") pod \"csi-node-driver-srhkg\" (UID: \"09520a02-648f-4672-85a7-9b0a62557d5f\") " pod="calico-system/csi-node-driver-srhkg" Nov 12 20:48:51.151879 kubelet[2510]: I1112 20:48:51.151646 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsgr6\" (UniqueName: \"kubernetes.io/projected/09520a02-648f-4672-85a7-9b0a62557d5f-kube-api-access-gsgr6\") pod \"csi-node-driver-srhkg\" (UID: \"09520a02-648f-4672-85a7-9b0a62557d5f\") " pod="calico-system/csi-node-driver-srhkg" Nov 12 20:48:51.151879 kubelet[2510]: I1112 20:48:51.151710 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09520a02-648f-4672-85a7-9b0a62557d5f-registration-dir\") pod \"csi-node-driver-srhkg\" (UID: \"09520a02-648f-4672-85a7-9b0a62557d5f\") " pod="calico-system/csi-node-driver-srhkg" Nov 12 20:48:51.174378 kubelet[2510]: E1112 20:48:51.172601 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.174378 kubelet[2510]: W1112 20:48:51.172643 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.174378 kubelet[2510]: E1112 20:48:51.172676 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.187266 systemd[1]: Started cri-containerd-fea3ca469ea83c1e222db5c1bab983a0fa8cf3c2c1e6dbede1a8f997043f53f3.scope - libcontainer container fea3ca469ea83c1e222db5c1bab983a0fa8cf3c2c1e6dbede1a8f997043f53f3. 
Nov 12 20:48:51.196892 kubelet[2510]: E1112 20:48:51.195699 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.196892 kubelet[2510]: W1112 20:48:51.195724 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.196892 kubelet[2510]: E1112 20:48:51.195746 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.229868 kubelet[2510]: E1112 20:48:51.229639 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:51.232812 containerd[1462]: time="2024-11-12T20:48:51.231839005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2p5h6,Uid:5423797a-a9ae-41b8-b941-5f27fb427451,Namespace:calico-system,Attempt:0,}" Nov 12 20:48:51.254056 kubelet[2510]: E1112 20:48:51.254016 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.254056 kubelet[2510]: W1112 20:48:51.254043 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.254766 kubelet[2510]: E1112 20:48:51.254074 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.254766 kubelet[2510]: E1112 20:48:51.254430 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.254766 kubelet[2510]: W1112 20:48:51.254443 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.254766 kubelet[2510]: E1112 20:48:51.254465 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.256079 kubelet[2510]: E1112 20:48:51.255911 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.256079 kubelet[2510]: W1112 20:48:51.255954 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.256079 kubelet[2510]: E1112 20:48:51.255979 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:51.257514 kubelet[2510]: E1112 20:48:51.256905 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.257514 kubelet[2510]: W1112 20:48:51.256923 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.257514 kubelet[2510]: E1112 20:48:51.256982 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.258049 kubelet[2510]: E1112 20:48:51.257818 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.258049 kubelet[2510]: W1112 20:48:51.257833 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.258411 kubelet[2510]: E1112 20:48:51.258254 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.258411 kubelet[2510]: E1112 20:48:51.258302 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.258411 kubelet[2510]: W1112 20:48:51.258322 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.258799 kubelet[2510]: E1112 20:48:51.258631 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.259233 kubelet[2510]: E1112 20:48:51.259074 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.259233 kubelet[2510]: W1112 20:48:51.259090 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.259729 kubelet[2510]: E1112 20:48:51.259700 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.261659 kubelet[2510]: W1112 20:48:51.261179 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.261659 kubelet[2510]: E1112 20:48:51.261242 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.261659 kubelet[2510]: E1112 20:48:51.261258 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:51.261659 kubelet[2510]: E1112 20:48:51.261591 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.261659 kubelet[2510]: W1112 20:48:51.261605 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.261659 kubelet[2510]: E1112 20:48:51.261624 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.262145 kubelet[2510]: E1112 20:48:51.262123 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.262145 kubelet[2510]: W1112 20:48:51.262143 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.262237 kubelet[2510]: E1112 20:48:51.262168 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.262825 kubelet[2510]: E1112 20:48:51.262796 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.262825 kubelet[2510]: W1112 20:48:51.262813 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.263003 kubelet[2510]: E1112 20:48:51.262972 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.263263 kubelet[2510]: E1112 20:48:51.263247 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.263385 kubelet[2510]: W1112 20:48:51.263260 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.263531 kubelet[2510]: E1112 20:48:51.263497 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.264000 kubelet[2510]: E1112 20:48:51.263977 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.264000 kubelet[2510]: W1112 20:48:51.263994 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.264349 kubelet[2510]: E1112 20:48:51.264187 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:51.264792 kubelet[2510]: E1112 20:48:51.264773 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.264792 kubelet[2510]: W1112 20:48:51.264789 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.265116 kubelet[2510]: E1112 20:48:51.264865 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.265116 kubelet[2510]: E1112 20:48:51.265081 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.265116 kubelet[2510]: W1112 20:48:51.265091 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.265387 kubelet[2510]: E1112 20:48:51.265362 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.266706 kubelet[2510]: E1112 20:48:51.266678 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.266706 kubelet[2510]: W1112 20:48:51.266702 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.266924 kubelet[2510]: E1112 20:48:51.266836 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.267964 kubelet[2510]: E1112 20:48:51.267938 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.267964 kubelet[2510]: W1112 20:48:51.267960 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.269782 kubelet[2510]: E1112 20:48:51.269749 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.270153 kubelet[2510]: E1112 20:48:51.270135 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.270153 kubelet[2510]: W1112 20:48:51.270151 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.270358 kubelet[2510]: E1112 20:48:51.270341 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:51.270486 kubelet[2510]: E1112 20:48:51.270473 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.270486 kubelet[2510]: W1112 20:48:51.270485 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.270647 kubelet[2510]: E1112 20:48:51.270575 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.270723 kubelet[2510]: E1112 20:48:51.270709 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.270723 kubelet[2510]: W1112 20:48:51.270722 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.270830 kubelet[2510]: E1112 20:48:51.270811 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.271030 kubelet[2510]: E1112 20:48:51.271015 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.271030 kubelet[2510]: W1112 20:48:51.271029 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.271132 kubelet[2510]: E1112 20:48:51.271045 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.271289 kubelet[2510]: E1112 20:48:51.271270 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.271289 kubelet[2510]: W1112 20:48:51.271286 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.271410 kubelet[2510]: E1112 20:48:51.271315 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.272127 kubelet[2510]: E1112 20:48:51.272101 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.272127 kubelet[2510]: W1112 20:48:51.272124 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.272239 kubelet[2510]: E1112 20:48:51.272141 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:51.275994 kubelet[2510]: E1112 20:48:51.275958 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.275994 kubelet[2510]: W1112 20:48:51.275989 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.276223 kubelet[2510]: E1112 20:48:51.276024 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.277142 kubelet[2510]: E1112 20:48:51.277110 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.277142 kubelet[2510]: W1112 20:48:51.277136 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.277305 kubelet[2510]: E1112 20:48:51.277158 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.300561 kubelet[2510]: E1112 20:48:51.300524 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:51.300561 kubelet[2510]: W1112 20:48:51.300550 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:51.300561 kubelet[2510]: E1112 20:48:51.300580 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:51.315242 containerd[1462]: time="2024-11-12T20:48:51.314000610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:51.315242 containerd[1462]: time="2024-11-12T20:48:51.314341896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:51.315242 containerd[1462]: time="2024-11-12T20:48:51.314360488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:51.315242 containerd[1462]: time="2024-11-12T20:48:51.314536514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:51.334698 containerd[1462]: time="2024-11-12T20:48:51.334313673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5455776599-xjg5v,Uid:675fde2e-078d-4015-9e28-be09433f6214,Namespace:calico-system,Attempt:0,} returns sandbox id \"fea3ca469ea83c1e222db5c1bab983a0fa8cf3c2c1e6dbede1a8f997043f53f3\"" Nov 12 20:48:51.337595 kubelet[2510]: E1112 20:48:51.336816 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:51.343062 containerd[1462]: time="2024-11-12T20:48:51.342991871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:48:51.372323 systemd[1]: Started cri-containerd-9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce.scope - libcontainer container 9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce. Nov 12 20:48:51.424378 containerd[1462]: time="2024-11-12T20:48:51.424329035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2p5h6,Uid:5423797a-a9ae-41b8-b941-5f27fb427451,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce\"" Nov 12 20:48:51.428084 kubelet[2510]: E1112 20:48:51.427981 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:52.163904 kubelet[2510]: E1112 20:48:52.163503 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srhkg" podUID="09520a02-648f-4672-85a7-9b0a62557d5f" Nov 12 20:48:53.299786 containerd[1462]: time="2024-11-12T20:48:53.299662069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:53.303433 containerd[1462]: time="2024-11-12T20:48:53.303117767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:48:53.307261 containerd[1462]: time="2024-11-12T20:48:53.307156429Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:53.314756 containerd[1462]: time="2024-11-12T20:48:53.314638786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:53.315945 containerd[1462]: time="2024-11-12T20:48:53.315507467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 1.972444209s" Nov 12 20:48:53.315945 containerd[1462]: time="2024-11-12T20:48:53.315546756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference 
\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:48:53.318712 containerd[1462]: time="2024-11-12T20:48:53.317977665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:48:53.336571 containerd[1462]: time="2024-11-12T20:48:53.336525144Z" level=info msg="CreateContainer within sandbox \"fea3ca469ea83c1e222db5c1bab983a0fa8cf3c2c1e6dbede1a8f997043f53f3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:48:53.433191 containerd[1462]: time="2024-11-12T20:48:53.433056326Z" level=info msg="CreateContainer within sandbox \"fea3ca469ea83c1e222db5c1bab983a0fa8cf3c2c1e6dbede1a8f997043f53f3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"12da42a9a4181c130860dbef138279683636b90fab3a6195ec2a6661ad0fd68a\"" Nov 12 20:48:53.434874 containerd[1462]: time="2024-11-12T20:48:53.434550283Z" level=info msg="StartContainer for \"12da42a9a4181c130860dbef138279683636b90fab3a6195ec2a6661ad0fd68a\"" Nov 12 20:48:53.484205 systemd[1]: Started cri-containerd-12da42a9a4181c130860dbef138279683636b90fab3a6195ec2a6661ad0fd68a.scope - libcontainer container 12da42a9a4181c130860dbef138279683636b90fab3a6195ec2a6661ad0fd68a. Nov 12 20:48:53.558947 containerd[1462]: time="2024-11-12T20:48:53.558370344Z" level=info msg="StartContainer for \"12da42a9a4181c130860dbef138279683636b90fab3a6195ec2a6661ad0fd68a\" returns successfully" Nov 12 20:48:54.163921 kubelet[2510]: E1112 20:48:54.163808 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srhkg" podUID="09520a02-648f-4672-85a7-9b0a62557d5f" Nov 12 20:48:54.273589 kubelet[2510]: E1112 20:48:54.273252 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:54.292706 kubelet[2510]: E1112 20:48:54.292635 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.292706 kubelet[2510]: W1112 20:48:54.292672 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.292706 kubelet[2510]: E1112 20:48:54.292706 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.293145 kubelet[2510]: E1112 20:48:54.293107 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.293145 kubelet[2510]: W1112 20:48:54.293139 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.293240 kubelet[2510]: E1112 20:48:54.293156 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:54.293478 kubelet[2510]: E1112 20:48:54.293449 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.293478 kubelet[2510]: W1112 20:48:54.293467 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.293627 kubelet[2510]: E1112 20:48:54.293481 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.293884 kubelet[2510]: E1112 20:48:54.293834 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.293884 kubelet[2510]: W1112 20:48:54.293866 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.293884 kubelet[2510]: E1112 20:48:54.293880 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.295260 kubelet[2510]: E1112 20:48:54.294383 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.295260 kubelet[2510]: W1112 20:48:54.294408 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.295260 kubelet[2510]: E1112 20:48:54.294428 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.295260 kubelet[2510]: E1112 20:48:54.294800 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.295260 kubelet[2510]: W1112 20:48:54.294811 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.295260 kubelet[2510]: E1112 20:48:54.294824 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.295628 kubelet[2510]: E1112 20:48:54.295416 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.295628 kubelet[2510]: W1112 20:48:54.295429 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.295628 kubelet[2510]: E1112 20:48:54.295475 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:54.296098 kubelet[2510]: E1112 20:48:54.296071 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.296201 kubelet[2510]: W1112 20:48:54.296124 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.296201 kubelet[2510]: E1112 20:48:54.296139 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.296885 kubelet[2510]: E1112 20:48:54.296445 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.296885 kubelet[2510]: W1112 20:48:54.296460 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.296885 kubelet[2510]: E1112 20:48:54.296498 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.297384 kubelet[2510]: E1112 20:48:54.297353 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.297384 kubelet[2510]: W1112 20:48:54.297377 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.297629 kubelet[2510]: E1112 20:48:54.297392 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.297629 kubelet[2510]: E1112 20:48:54.297626 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.298649 kubelet[2510]: W1112 20:48:54.297635 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.298649 kubelet[2510]: E1112 20:48:54.297646 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.298649 kubelet[2510]: E1112 20:48:54.298181 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.298649 kubelet[2510]: W1112 20:48:54.298196 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.298649 kubelet[2510]: E1112 20:48:54.298210 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:54.298977 kubelet[2510]: E1112 20:48:54.298682 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.298977 kubelet[2510]: W1112 20:48:54.298695 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.298977 kubelet[2510]: E1112 20:48:54.298707 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.299974 kubelet[2510]: E1112 20:48:54.299326 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.299974 kubelet[2510]: W1112 20:48:54.299344 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.299974 kubelet[2510]: E1112 20:48:54.299420 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.299974 kubelet[2510]: E1112 20:48:54.299739 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.299974 kubelet[2510]: W1112 20:48:54.299750 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.299974 kubelet[2510]: E1112 20:48:54.299779 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.302017 kubelet[2510]: I1112 20:48:54.301962 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5455776599-xjg5v" podStartSLOduration=2.325738248 podStartE2EDuration="4.301940518s" podCreationTimestamp="2024-11-12 20:48:50 +0000 UTC" firstStartedPulling="2024-11-12 20:48:51.340697492 +0000 UTC m=+16.345674842" lastFinishedPulling="2024-11-12 20:48:53.316899752 +0000 UTC m=+18.321877112" observedRunningTime="2024-11-12 20:48:54.296111799 +0000 UTC m=+19.301089167" watchObservedRunningTime="2024-11-12 20:48:54.301940518 +0000 UTC m=+19.306917879" Nov 12 20:48:54.386059 kubelet[2510]: E1112 20:48:54.385971 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.386059 kubelet[2510]: W1112 20:48:54.386015 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.386059 kubelet[2510]: E1112 20:48:54.386068 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:54.386696 kubelet[2510]: E1112 20:48:54.386620 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.386696 kubelet[2510]: W1112 20:48:54.386642 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.386696 kubelet[2510]: E1112 20:48:54.386689 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.387295 kubelet[2510]: E1112 20:48:54.387073 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.387295 kubelet[2510]: W1112 20:48:54.387088 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.387295 kubelet[2510]: E1112 20:48:54.387112 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.387468 kubelet[2510]: E1112 20:48:54.387352 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.387468 kubelet[2510]: W1112 20:48:54.387364 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.387468 kubelet[2510]: E1112 20:48:54.387392 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.387678 kubelet[2510]: E1112 20:48:54.387655 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.387678 kubelet[2510]: W1112 20:48:54.387673 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.387830 kubelet[2510]: E1112 20:48:54.387703 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.387997 kubelet[2510]: E1112 20:48:54.387982 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.388097 kubelet[2510]: W1112 20:48:54.387997 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.388097 kubelet[2510]: E1112 20:48:54.388026 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:54.388644 kubelet[2510]: E1112 20:48:54.388623 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.388644 kubelet[2510]: W1112 20:48:54.388644 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.388789 kubelet[2510]: E1112 20:48:54.388666 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.389240 kubelet[2510]: E1112 20:48:54.389123 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.389240 kubelet[2510]: W1112 20:48:54.389142 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.389240 kubelet[2510]: E1112 20:48:54.389184 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.389472 kubelet[2510]: E1112 20:48:54.389415 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.389472 kubelet[2510]: W1112 20:48:54.389428 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.389663 kubelet[2510]: E1112 20:48:54.389626 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.389749 kubelet[2510]: E1112 20:48:54.389661 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.389749 kubelet[2510]: W1112 20:48:54.389674 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.389749 kubelet[2510]: E1112 20:48:54.389707 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.390154 kubelet[2510]: E1112 20:48:54.390135 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.390219 kubelet[2510]: W1112 20:48:54.390155 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.390219 kubelet[2510]: E1112 20:48:54.390179 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:54.390442 kubelet[2510]: E1112 20:48:54.390426 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.390483 kubelet[2510]: W1112 20:48:54.390442 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.390483 kubelet[2510]: E1112 20:48:54.390472 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.390786 kubelet[2510]: E1112 20:48:54.390764 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.390786 kubelet[2510]: W1112 20:48:54.390781 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.390887 kubelet[2510]: E1112 20:48:54.390808 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.392024 kubelet[2510]: E1112 20:48:54.391947 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.392024 kubelet[2510]: W1112 20:48:54.391965 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.392024 kubelet[2510]: E1112 20:48:54.391984 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.392247 kubelet[2510]: E1112 20:48:54.392235 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.392416 kubelet[2510]: W1112 20:48:54.392247 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.392416 kubelet[2510]: E1112 20:48:54.392272 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.392569 kubelet[2510]: E1112 20:48:54.392553 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.392569 kubelet[2510]: W1112 20:48:54.392568 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.392671 kubelet[2510]: E1112 20:48:54.392592 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:48:54.392996 kubelet[2510]: E1112 20:48:54.392963 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.392996 kubelet[2510]: W1112 20:48:54.392976 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.392996 kubelet[2510]: E1112 20:48:54.392992 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.393222 kubelet[2510]: E1112 20:48:54.393195 2510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:48:54.393222 kubelet[2510]: W1112 20:48:54.393207 2510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:48:54.393222 kubelet[2510]: E1112 20:48:54.393216 2510 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:48:54.766164 containerd[1462]: time="2024-11-12T20:48:54.766015138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:54.768944 containerd[1462]: time="2024-11-12T20:48:54.768888971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:48:54.773314 containerd[1462]: time="2024-11-12T20:48:54.773197159Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:54.780126 containerd[1462]: time="2024-11-12T20:48:54.780046805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:54.781386 containerd[1462]: time="2024-11-12T20:48:54.781064140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.463047808s" Nov 12 20:48:54.781386 containerd[1462]: time="2024-11-12T20:48:54.781103640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:48:54.785326 containerd[1462]: time="2024-11-12T20:48:54.785195222Z" level=info msg="CreateContainer within sandbox \"9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:48:54.820405 containerd[1462]: time="2024-11-12T20:48:54.820327656Z" level=info msg="CreateContainer within sandbox 
\"9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68\"" Nov 12 20:48:54.822901 containerd[1462]: time="2024-11-12T20:48:54.821192830Z" level=info msg="StartContainer for \"db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68\"" Nov 12 20:48:54.870406 systemd[1]: Started cri-containerd-db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68.scope - libcontainer container db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68. Nov 12 20:48:54.920519 containerd[1462]: time="2024-11-12T20:48:54.920305327Z" level=info msg="StartContainer for \"db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68\" returns successfully" Nov 12 20:48:54.928014 systemd[1]: cri-containerd-db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68.scope: Deactivated successfully. Nov 12 20:48:54.971758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68-rootfs.mount: Deactivated successfully. Nov 12 20:48:54.986250 containerd[1462]: time="2024-11-12T20:48:54.985947621Z" level=info msg="shim disconnected" id=db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68 namespace=k8s.io Nov 12 20:48:54.986250 containerd[1462]: time="2024-11-12T20:48:54.986218096Z" level=warning msg="cleaning up after shim disconnected" id=db49d2220c5f6b3a04d465c95c66f9c684524074334e38f04ad6a154c973aa68 namespace=k8s.io Nov 12 20:48:54.986250 containerd[1462]: time="2024-11-12T20:48:54.986239610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:48:55.276882 kubelet[2510]: I1112 20:48:55.276835 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:48:55.277435 kubelet[2510]: E1112 20:48:55.277188 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:55.278006 kubelet[2510]: E1112 20:48:55.277746 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:55.281025 containerd[1462]: time="2024-11-12T20:48:55.280971516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:48:56.164035 kubelet[2510]: E1112 20:48:56.163958 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srhkg" podUID="09520a02-648f-4672-85a7-9b0a62557d5f" Nov 12 20:48:58.163148 kubelet[2510]: E1112 20:48:58.163105 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srhkg" podUID="09520a02-648f-4672-85a7-9b0a62557d5f" Nov 12 20:48:58.833214 containerd[1462]: time="2024-11-12T20:48:58.832951573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:58.836143 containerd[1462]: 
time="2024-11-12T20:48:58.836057351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:48:58.838636 containerd[1462]: time="2024-11-12T20:48:58.838561866Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:58.843271 containerd[1462]: time="2024-11-12T20:48:58.843180425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:58.844996 containerd[1462]: time="2024-11-12T20:48:58.844473806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 3.563375136s" Nov 12 20:48:58.844996 containerd[1462]: time="2024-11-12T20:48:58.844515021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:48:58.902471 containerd[1462]: time="2024-11-12T20:48:58.902399092Z" level=info msg="CreateContainer within sandbox \"9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:48:58.941113 containerd[1462]: time="2024-11-12T20:48:58.940996155Z" level=info msg="CreateContainer within sandbox \"9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9\"" Nov 12 20:48:58.941815 containerd[1462]: time="2024-11-12T20:48:58.941691707Z" level=info msg="StartContainer for \"03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9\"" Nov 12 20:48:59.046205 systemd[1]: run-containerd-runc-k8s.io-03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9-runc.om6bXD.mount: Deactivated successfully. Nov 12 20:48:59.058236 systemd[1]: Started cri-containerd-03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9.scope - libcontainer container 03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9. Nov 12 20:48:59.110055 containerd[1462]: time="2024-11-12T20:48:59.109804956Z" level=info msg="StartContainer for \"03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9\" returns successfully" Nov 12 20:48:59.295562 kubelet[2510]: E1112 20:48:59.295211 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:48:59.839472 systemd[1]: cri-containerd-03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9.scope: Deactivated successfully. 
Nov 12 20:48:59.914217 containerd[1462]: time="2024-11-12T20:48:59.914116275Z" level=info msg="shim disconnected" id=03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9 namespace=k8s.io Nov 12 20:48:59.916079 containerd[1462]: time="2024-11-12T20:48:59.915022731Z" level=warning msg="cleaning up after shim disconnected" id=03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9 namespace=k8s.io Nov 12 20:48:59.916079 containerd[1462]: time="2024-11-12T20:48:59.915052080Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:48:59.925951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03735e1738c5ecd0b4c95aa5535fe6d1fe6418f4df35cb1a7a605479cc40dfb9-rootfs.mount: Deactivated successfully. Nov 12 20:48:59.940230 kubelet[2510]: I1112 20:48:59.939279 2510 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 20:48:59.997948 systemd[1]: Created slice kubepods-burstable-podb84cd075_0ebd_4d27_8bb3_99aa5a83c1e5.slice - libcontainer container kubepods-burstable-podb84cd075_0ebd_4d27_8bb3_99aa5a83c1e5.slice. Nov 12 20:49:00.035580 systemd[1]: Created slice kubepods-besteffort-podd0780930_4dfa_4cd9_9093_a1b94ae21874.slice - libcontainer container kubepods-besteffort-podd0780930_4dfa_4cd9_9093_a1b94ae21874.slice. Nov 12 20:49:00.052861 systemd[1]: Created slice kubepods-burstable-pod5666e17e_e08c_4c69_b9f9_6f9b8433b194.slice - libcontainer container kubepods-burstable-pod5666e17e_e08c_4c69_b9f9_6f9b8433b194.slice. Nov 12 20:49:00.068791 systemd[1]: Created slice kubepods-besteffort-pod3cfde3c6_532c_42e8_b5c0_a7b194fb76ba.slice - libcontainer container kubepods-besteffort-pod3cfde3c6_532c_42e8_b5c0_a7b194fb76ba.slice. Nov 12 20:49:00.086892 systemd[1]: Created slice kubepods-besteffort-pod0ad6a9de_2d10_4788_aa6a_d0b5f89e72a6.slice - libcontainer container kubepods-besteffort-pod0ad6a9de_2d10_4788_aa6a_d0b5f89e72a6.slice. 
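The recurring "cni plugin not initialized" messages above, and the sandbox setup failures recorded just below, both hinge on a file the calico/node container has not yet written: /var/lib/calico/nodename (the later error text cites that stat call directly). A readiness check in the same spirit, purely as a sketch and not Calico's actual implementation:

package main

import (
	"fmt"
	"os"
)

// Illustrative only: the Calico CNI plugin needs the node name that the
// calico/node container writes to /var/lib/calico/nodename; the sandbox
// errors below fail on exactly that stat. This mimics the readiness check.
func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("calico networking not ready yet: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("calico/node has written its nodename; CNI setup can proceed")
}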
Nov 12 20:49:00.155514 kubelet[2510]: I1112 20:49:00.154616 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5-config-volume\") pod \"coredns-6f6b679f8f-bg25k\" (UID: \"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5\") " pod="kube-system/coredns-6f6b679f8f-bg25k" Nov 12 20:49:00.155514 kubelet[2510]: I1112 20:49:00.154691 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5666e17e-e08c-4c69-b9f9-6f9b8433b194-config-volume\") pod \"coredns-6f6b679f8f-lvkdh\" (UID: \"5666e17e-e08c-4c69-b9f9-6f9b8433b194\") " pod="kube-system/coredns-6f6b679f8f-lvkdh" Nov 12 20:49:00.155514 kubelet[2510]: I1112 20:49:00.154727 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6-calico-apiserver-certs\") pod \"calico-apiserver-857999858d-qgxrw\" (UID: \"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6\") " pod="calico-apiserver/calico-apiserver-857999858d-qgxrw" Nov 12 20:49:00.155514 kubelet[2510]: I1112 20:49:00.154765 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbfcj\" (UniqueName: \"kubernetes.io/projected/3cfde3c6-532c-42e8-b5c0-a7b194fb76ba-kube-api-access-nbfcj\") pod \"calico-apiserver-857999858d-g2wj2\" (UID: \"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba\") " pod="calico-apiserver/calico-apiserver-857999858d-g2wj2" Nov 12 20:49:00.155514 kubelet[2510]: I1112 20:49:00.154798 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tggj\" (UniqueName: \"kubernetes.io/projected/d0780930-4dfa-4cd9-9093-a1b94ae21874-kube-api-access-2tggj\") pod \"calico-kube-controllers-6fbd766b6b-f77jl\" (UID: \"d0780930-4dfa-4cd9-9093-a1b94ae21874\") " pod="calico-system/calico-kube-controllers-6fbd766b6b-f77jl" Nov 12 20:49:00.155949 kubelet[2510]: I1112 20:49:00.154880 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpc4f\" (UniqueName: \"kubernetes.io/projected/b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5-kube-api-access-dpc4f\") pod \"coredns-6f6b679f8f-bg25k\" (UID: \"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5\") " pod="kube-system/coredns-6f6b679f8f-bg25k" Nov 12 20:49:00.155949 kubelet[2510]: I1112 20:49:00.154911 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db8th\" (UniqueName: \"kubernetes.io/projected/0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6-kube-api-access-db8th\") pod \"calico-apiserver-857999858d-qgxrw\" (UID: \"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6\") " pod="calico-apiserver/calico-apiserver-857999858d-qgxrw" Nov 12 20:49:00.155949 kubelet[2510]: I1112 20:49:00.154938 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0780930-4dfa-4cd9-9093-a1b94ae21874-tigera-ca-bundle\") pod \"calico-kube-controllers-6fbd766b6b-f77jl\" (UID: \"d0780930-4dfa-4cd9-9093-a1b94ae21874\") " pod="calico-system/calico-kube-controllers-6fbd766b6b-f77jl" Nov 12 20:49:00.155949 kubelet[2510]: I1112 20:49:00.154968 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tm4xv\" (UniqueName: \"kubernetes.io/projected/5666e17e-e08c-4c69-b9f9-6f9b8433b194-kube-api-access-tm4xv\") pod \"coredns-6f6b679f8f-lvkdh\" (UID: \"5666e17e-e08c-4c69-b9f9-6f9b8433b194\") " pod="kube-system/coredns-6f6b679f8f-lvkdh" Nov 12 20:49:00.155949 kubelet[2510]: I1112 20:49:00.154995 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cfde3c6-532c-42e8-b5c0-a7b194fb76ba-calico-apiserver-certs\") pod \"calico-apiserver-857999858d-g2wj2\" (UID: \"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba\") " pod="calico-apiserver/calico-apiserver-857999858d-g2wj2" Nov 12 20:49:00.176423 systemd[1]: Created slice kubepods-besteffort-pod09520a02_648f_4672_85a7_9b0a62557d5f.slice - libcontainer container kubepods-besteffort-pod09520a02_648f_4672_85a7_9b0a62557d5f.slice. Nov 12 20:49:00.185766 containerd[1462]: time="2024-11-12T20:49:00.185705129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srhkg,Uid:09520a02-648f-4672-85a7-9b0a62557d5f,Namespace:calico-system,Attempt:0,}" Nov 12 20:49:00.362929 kubelet[2510]: E1112 20:49:00.361956 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:00.370059 containerd[1462]: time="2024-11-12T20:49:00.368705073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvkdh,Uid:5666e17e-e08c-4c69-b9f9-6f9b8433b194,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:00.379606 containerd[1462]: time="2024-11-12T20:49:00.369361738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbd766b6b-f77jl,Uid:d0780930-4dfa-4cd9-9093-a1b94ae21874,Namespace:calico-system,Attempt:0,}" Nov 12 20:49:00.394667 containerd[1462]: time="2024-11-12T20:49:00.389430025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-g2wj2,Uid:3cfde3c6-532c-42e8-b5c0-a7b194fb76ba,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:49:00.400030 kubelet[2510]: E1112 20:49:00.399401 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:00.400884 containerd[1462]: time="2024-11-12T20:49:00.400541480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-qgxrw,Uid:0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:49:00.438543 containerd[1462]: time="2024-11-12T20:49:00.437379518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:49:00.605392 kubelet[2510]: E1112 20:49:00.605331 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:00.606611 containerd[1462]: time="2024-11-12T20:49:00.606551194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bg25k,Uid:b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:00.895406 containerd[1462]: time="2024-11-12T20:49:00.895251306Z" level=error msg="Failed to destroy network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.896095 containerd[1462]: time="2024-11-12T20:49:00.896041628Z" level=error msg="encountered an error cleaning up failed sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.896417 containerd[1462]: time="2024-11-12T20:49:00.896347916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srhkg,Uid:09520a02-648f-4672-85a7-9b0a62557d5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.906884 kubelet[2510]: E1112 20:49:00.906807 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.907054 kubelet[2510]: E1112 20:49:00.906907 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-srhkg" Nov 12 20:49:00.907054 kubelet[2510]: E1112 20:49:00.906930 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-srhkg" Nov 12 20:49:00.907054 kubelet[2510]: E1112 20:49:00.906981 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-srhkg_calico-system(09520a02-648f-4672-85a7-9b0a62557d5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-srhkg_calico-system(09520a02-648f-4672-85a7-9b0a62557d5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-srhkg" podUID="09520a02-648f-4672-85a7-9b0a62557d5f" Nov 12 20:49:00.972705 containerd[1462]: time="2024-11-12T20:49:00.972582057Z" level=error msg="Failed to destroy network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.984986 containerd[1462]: time="2024-11-12T20:49:00.984841122Z" level=error msg="encountered an error cleaning up failed sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.985441 containerd[1462]: time="2024-11-12T20:49:00.985395385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-g2wj2,Uid:3cfde3c6-532c-42e8-b5c0-a7b194fb76ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.986114 kubelet[2510]: E1112 20:49:00.985923 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:00.986114 kubelet[2510]: E1112 20:49:00.985996 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857999858d-g2wj2" Nov 12 20:49:00.986114 kubelet[2510]: E1112 20:49:00.986027 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857999858d-g2wj2" Nov 12 20:49:00.986416 kubelet[2510]: E1112 20:49:00.986090 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857999858d-g2wj2_calico-apiserver(3cfde3c6-532c-42e8-b5c0-a7b194fb76ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857999858d-g2wj2_calico-apiserver(3cfde3c6-532c-42e8-b5c0-a7b194fb76ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857999858d-g2wj2" podUID="3cfde3c6-532c-42e8-b5c0-a7b194fb76ba" Nov 12 20:49:00.993281 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8-shm.mount: Deactivated successfully. Nov 12 20:49:01.004540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800-shm.mount: Deactivated successfully. Nov 12 20:49:01.110595 containerd[1462]: time="2024-11-12T20:49:01.110520870Z" level=error msg="Failed to destroy network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.117565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca-shm.mount: Deactivated successfully. Nov 12 20:49:01.119917 containerd[1462]: time="2024-11-12T20:49:01.119073043Z" level=error msg="encountered an error cleaning up failed sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.122042 containerd[1462]: time="2024-11-12T20:49:01.119178898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvkdh,Uid:5666e17e-e08c-4c69-b9f9-6f9b8433b194,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.122431 kubelet[2510]: E1112 20:49:01.122324 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.122431 kubelet[2510]: E1112 20:49:01.122398 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lvkdh" Nov 12 20:49:01.123549 kubelet[2510]: E1112 20:49:01.122433 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lvkdh" Nov 12 20:49:01.123549 kubelet[2510]: E1112 20:49:01.122494 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-lvkdh_kube-system(5666e17e-e08c-4c69-b9f9-6f9b8433b194)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lvkdh_kube-system(5666e17e-e08c-4c69-b9f9-6f9b8433b194)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lvkdh" podUID="5666e17e-e08c-4c69-b9f9-6f9b8433b194" Nov 12 20:49:01.139148 containerd[1462]: time="2024-11-12T20:49:01.139070350Z" level=error msg="Failed to destroy network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.142585 containerd[1462]: time="2024-11-12T20:49:01.141234449Z" level=error msg="encountered an error cleaning up failed sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.142585 containerd[1462]: time="2024-11-12T20:49:01.141345261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-qgxrw,Uid:0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.143147 kubelet[2510]: E1112 20:49:01.141689 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.143147 kubelet[2510]: E1112 20:49:01.141773 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857999858d-qgxrw" Nov 12 20:49:01.143147 kubelet[2510]: E1112 20:49:01.141817 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857999858d-qgxrw" Nov 12 20:49:01.145405 kubelet[2510]: E1112 20:49:01.141931 2510 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857999858d-qgxrw_calico-apiserver(0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857999858d-qgxrw_calico-apiserver(0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857999858d-qgxrw" podUID="0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6" Nov 12 20:49:01.146413 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2-shm.mount: Deactivated successfully. Nov 12 20:49:01.158073 containerd[1462]: time="2024-11-12T20:49:01.158003927Z" level=error msg="Failed to destroy network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.159425 containerd[1462]: time="2024-11-12T20:49:01.158718031Z" level=error msg="encountered an error cleaning up failed sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.159425 containerd[1462]: time="2024-11-12T20:49:01.158807100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbd766b6b-f77jl,Uid:d0780930-4dfa-4cd9-9093-a1b94ae21874,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.159762 kubelet[2510]: E1112 20:49:01.159178 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.159762 kubelet[2510]: E1112 20:49:01.159256 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fbd766b6b-f77jl" Nov 12 20:49:01.159762 kubelet[2510]: E1112 20:49:01.159284 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fbd766b6b-f77jl" Nov 12 20:49:01.161022 kubelet[2510]: E1112 20:49:01.159344 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fbd766b6b-f77jl_calico-system(d0780930-4dfa-4cd9-9093-a1b94ae21874)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fbd766b6b-f77jl_calico-system(d0780930-4dfa-4cd9-9093-a1b94ae21874)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fbd766b6b-f77jl" podUID="d0780930-4dfa-4cd9-9093-a1b94ae21874" Nov 12 20:49:01.161886 containerd[1462]: time="2024-11-12T20:49:01.161352521Z" level=error msg="Failed to destroy network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.162027 containerd[1462]: time="2024-11-12T20:49:01.161779111Z" level=error msg="encountered an error cleaning up failed sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.162187 containerd[1462]: time="2024-11-12T20:49:01.162155880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bg25k,Uid:b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.162802 kubelet[2510]: E1112 20:49:01.162549 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.162802 kubelet[2510]: E1112 20:49:01.162630 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bg25k" Nov 12 20:49:01.162802 kubelet[2510]: E1112 20:49:01.162663 2510 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bg25k" Nov 12 20:49:01.163086 kubelet[2510]: E1112 20:49:01.162717 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bg25k_kube-system(b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bg25k_kube-system(b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bg25k" podUID="b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5" Nov 12 20:49:01.167653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c-shm.mount: Deactivated successfully. Nov 12 20:49:01.167903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16-shm.mount: Deactivated successfully. Nov 12 20:49:01.408995 kubelet[2510]: I1112 20:49:01.408185 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:01.422754 containerd[1462]: time="2024-11-12T20:49:01.422666840Z" level=info msg="StopPodSandbox for \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\"" Nov 12 20:49:01.440223 kubelet[2510]: I1112 20:49:01.439500 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:01.442685 containerd[1462]: time="2024-11-12T20:49:01.442290003Z" level=info msg="StopPodSandbox for \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\"" Nov 12 20:49:01.446137 containerd[1462]: time="2024-11-12T20:49:01.445544659Z" level=info msg="Ensure that sandbox 55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca in task-service has been cleanup successfully" Nov 12 20:49:01.446137 containerd[1462]: time="2024-11-12T20:49:01.445712118Z" level=info msg="Ensure that sandbox 728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16 in task-service has been cleanup successfully" Nov 12 20:49:01.466985 kubelet[2510]: I1112 20:49:01.466940 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:01.472917 containerd[1462]: time="2024-11-12T20:49:01.472491305Z" level=info msg="StopPodSandbox for \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\"" Nov 12 20:49:01.474173 containerd[1462]: time="2024-11-12T20:49:01.473956061Z" level=info msg="Ensure that sandbox 81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8 in task-service has been cleanup successfully" Nov 12 20:49:01.486723 kubelet[2510]: I1112 20:49:01.485785 2510 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:01.489548 containerd[1462]: time="2024-11-12T20:49:01.489356885Z" level=info msg="StopPodSandbox for \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\"" Nov 12 20:49:01.493711 containerd[1462]: time="2024-11-12T20:49:01.492119549Z" level=info msg="Ensure that sandbox 22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c in task-service has been cleanup successfully" Nov 12 20:49:01.494049 kubelet[2510]: I1112 20:49:01.493118 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:01.495132 containerd[1462]: time="2024-11-12T20:49:01.495071540Z" level=info msg="StopPodSandbox for \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\"" Nov 12 20:49:01.495381 containerd[1462]: time="2024-11-12T20:49:01.495350471Z" level=info msg="Ensure that sandbox ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2 in task-service has been cleanup successfully" Nov 12 20:49:01.502089 kubelet[2510]: I1112 20:49:01.502020 2510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:01.504498 containerd[1462]: time="2024-11-12T20:49:01.504388432Z" level=info msg="StopPodSandbox for \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\"" Nov 12 20:49:01.510718 containerd[1462]: time="2024-11-12T20:49:01.504676952Z" level=info msg="Ensure that sandbox 7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800 in task-service has been cleanup successfully" Nov 12 20:49:01.717346 containerd[1462]: time="2024-11-12T20:49:01.715460748Z" level=error msg="StopPodSandbox for \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\" failed" error="failed to destroy network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.717346 containerd[1462]: time="2024-11-12T20:49:01.715688915Z" level=error msg="StopPodSandbox for \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\" failed" error="failed to destroy network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.717519 kubelet[2510]: E1112 20:49:01.716030 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:01.717519 kubelet[2510]: E1112 20:49:01.716210 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2"} Nov 12 
20:49:01.717519 kubelet[2510]: E1112 20:49:01.716316 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:01.717519 kubelet[2510]: E1112 20:49:01.716349 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857999858d-qgxrw" podUID="0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6" Nov 12 20:49:01.717837 kubelet[2510]: E1112 20:49:01.716433 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:01.717837 kubelet[2510]: E1112 20:49:01.716462 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16"} Nov 12 20:49:01.717837 kubelet[2510]: E1112 20:49:01.716492 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0780930-4dfa-4cd9-9093-a1b94ae21874\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:01.717837 kubelet[2510]: E1112 20:49:01.716516 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0780930-4dfa-4cd9-9093-a1b94ae21874\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fbd766b6b-f77jl" podUID="d0780930-4dfa-4cd9-9093-a1b94ae21874" Nov 12 20:49:01.719834 containerd[1462]: time="2024-11-12T20:49:01.719754775Z" level=error msg="StopPodSandbox for \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\" failed" error="failed to destroy network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.721006 kubelet[2510]: E1112 20:49:01.720196 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:01.721006 kubelet[2510]: E1112 20:49:01.720261 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800"} Nov 12 20:49:01.721006 kubelet[2510]: E1112 20:49:01.720309 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:01.721006 kubelet[2510]: E1112 20:49:01.720345 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857999858d-g2wj2" podUID="3cfde3c6-532c-42e8-b5c0-a7b194fb76ba" Nov 12 20:49:01.721987 containerd[1462]: time="2024-11-12T20:49:01.721835293Z" level=error msg="StopPodSandbox for \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\" failed" error="failed to destroy network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.724046 kubelet[2510]: E1112 20:49:01.723750 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:01.724046 kubelet[2510]: E1112 20:49:01.723842 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca"} Nov 12 20:49:01.724046 kubelet[2510]: E1112 20:49:01.723923 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5666e17e-e08c-4c69-b9f9-6f9b8433b194\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:01.724046 kubelet[2510]: E1112 20:49:01.723959 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5666e17e-e08c-4c69-b9f9-6f9b8433b194\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lvkdh" podUID="5666e17e-e08c-4c69-b9f9-6f9b8433b194" Nov 12 20:49:01.727591 containerd[1462]: time="2024-11-12T20:49:01.727323112Z" level=error msg="StopPodSandbox for \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\" failed" error="failed to destroy network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.727911 kubelet[2510]: E1112 20:49:01.727711 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:01.727911 kubelet[2510]: E1112 20:49:01.727784 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c"} Nov 12 20:49:01.727911 kubelet[2510]: E1112 20:49:01.727839 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:01.727911 kubelet[2510]: E1112 20:49:01.727901 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bg25k" podUID="b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5" Nov 12 20:49:01.728710 containerd[1462]: time="2024-11-12T20:49:01.728550568Z" level=error 
msg="StopPodSandbox for \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\" failed" error="failed to destroy network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:49:01.729152 kubelet[2510]: E1112 20:49:01.729103 2510 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:01.729275 kubelet[2510]: E1112 20:49:01.729170 2510 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8"} Nov 12 20:49:01.729275 kubelet[2510]: E1112 20:49:01.729226 2510 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09520a02-648f-4672-85a7-9b0a62557d5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:49:01.729456 kubelet[2510]: E1112 20:49:01.729274 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09520a02-648f-4672-85a7-9b0a62557d5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-srhkg" podUID="09520a02-648f-4672-85a7-9b0a62557d5f" Nov 12 20:49:06.835346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806043202.mount: Deactivated successfully. 
Nov 12 20:49:06.895891 containerd[1462]: time="2024-11-12T20:49:06.895594634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:06.898216 containerd[1462]: time="2024-11-12T20:49:06.898154251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:49:06.900779 containerd[1462]: time="2024-11-12T20:49:06.900686938Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:06.905870 containerd[1462]: time="2024-11-12T20:49:06.905797011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:06.907947 containerd[1462]: time="2024-11-12T20:49:06.907636556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 6.469646331s" Nov 12 20:49:06.907947 containerd[1462]: time="2024-11-12T20:49:06.907704841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:49:06.928389 containerd[1462]: time="2024-11-12T20:49:06.928328913Z" level=info msg="CreateContainer within sandbox \"9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:49:06.995904 containerd[1462]: time="2024-11-12T20:49:06.995758671Z" level=info msg="CreateContainer within sandbox \"9ed48d0f7b7bfe12118ef5ef8c3e77797165efa14ca6c99605cc1fa1e60ba1ce\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"013e7d5423e39b0bd926a47b1792ea9bd3c9b29c88d507c775010dd5e9551580\"" Nov 12 20:49:06.998728 containerd[1462]: time="2024-11-12T20:49:06.998229948Z" level=info msg="StartContainer for \"013e7d5423e39b0bd926a47b1792ea9bd3c9b29c88d507c775010dd5e9551580\"" Nov 12 20:49:07.171080 systemd[1]: Started cri-containerd-013e7d5423e39b0bd926a47b1792ea9bd3c9b29c88d507c775010dd5e9551580.scope - libcontainer container 013e7d5423e39b0bd926a47b1792ea9bd3c9b29c88d507c775010dd5e9551580. Nov 12 20:49:07.218364 containerd[1462]: time="2024-11-12T20:49:07.217640816Z" level=info msg="StartContainer for \"013e7d5423e39b0bd926a47b1792ea9bd3c9b29c88d507c775010dd5e9551580\" returns successfully" Nov 12 20:49:07.325148 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:49:07.327501 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
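The pull line above reports both the byte count (140580710) and the wall-clock time (6.469646331s) for ghcr.io/flatcar/calico/node:v3.29.0, so the effective pull throughput can be read straight off the log. A small Go sketch of that arithmetic, with both values copied from the line above:

```go
// pull_rate.go — back-of-the-envelope throughput from the containerd pull line.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 140580710 // "bytes read" from the log line above
	dur, err := time.ParseDuration("6.469646331s")
	if err != nil {
		panic(err)
	}
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %s ≈ %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
}
```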
Nov 12 20:49:07.582525 kubelet[2510]: E1112 20:49:07.582482 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:07.637655 kubelet[2510]: I1112 20:49:07.637284 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2p5h6" podStartSLOduration=2.159036002 podStartE2EDuration="17.637261197s" podCreationTimestamp="2024-11-12 20:48:50 +0000 UTC" firstStartedPulling="2024-11-12 20:48:51.430780435 +0000 UTC m=+16.435757796" lastFinishedPulling="2024-11-12 20:49:06.90900564 +0000 UTC m=+31.913982991" observedRunningTime="2024-11-12 20:49:07.613084539 +0000 UTC m=+32.618061907" watchObservedRunningTime="2024-11-12 20:49:07.637261197 +0000 UTC m=+32.642238581" Nov 12 20:49:08.584292 kubelet[2510]: I1112 20:49:08.584093 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:08.585003 kubelet[2510]: E1112 20:49:08.584611 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:12.729805 kubelet[2510]: I1112 20:49:12.729683 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:12.730566 kubelet[2510]: E1112 20:49:12.730229 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:13.175804 containerd[1462]: time="2024-11-12T20:49:13.175759735Z" level=info msg="StopPodSandbox for \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\"" Nov 12 20:49:13.473982 kernel: bpftool[3818]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:49:13.597726 kubelet[2510]: E1112 20:49:13.597318 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.311 [INFO][3785] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.312 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" iface="eth0" netns="/var/run/netns/cni-e9d3c3c8-73d8-fd41-922d-9cd8a61cc0c6" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.313 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" iface="eth0" netns="/var/run/netns/cni-e9d3c3c8-73d8-fd41-922d-9cd8a61cc0c6" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.318 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" iface="eth0" netns="/var/run/netns/cni-e9d3c3c8-73d8-fd41-922d-9cd8a61cc0c6" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.318 [INFO][3785] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.318 [INFO][3785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.815 [INFO][3796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.818 [INFO][3796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.819 [INFO][3796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.834 [WARNING][3796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.834 [INFO][3796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.836 [INFO][3796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:13.842421 containerd[1462]: 2024-11-12 20:49:13.839 [INFO][3785] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:13.845040 containerd[1462]: time="2024-11-12T20:49:13.844971110Z" level=info msg="TearDown network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\" successfully" Nov 12 20:49:13.845040 containerd[1462]: time="2024-11-12T20:49:13.845012783Z" level=info msg="StopPodSandbox for \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\" returns successfully" Nov 12 20:49:13.851122 systemd[1]: run-netns-cni\x2de9d3c3c8\x2d73d8\x2dfd41\x2d922d\x2d9cd8a61cc0c6.mount: Deactivated successfully. 
Nov 12 20:49:13.889248 kubelet[2510]: E1112 20:49:13.889017 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:13.913049 containerd[1462]: time="2024-11-12T20:49:13.912988191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bg25k,Uid:b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5,Namespace:kube-system,Attempt:1,}" Nov 12 20:49:14.041498 systemd-networkd[1372]: vxlan.calico: Link UP Nov 12 20:49:14.041512 systemd-networkd[1372]: vxlan.calico: Gained carrier Nov 12 20:49:14.170953 containerd[1462]: time="2024-11-12T20:49:14.170177547Z" level=info msg="StopPodSandbox for \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\"" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.311 [INFO][3915] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.311 [INFO][3915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" iface="eth0" netns="/var/run/netns/cni-3ac71e66-5185-b594-66b8-92e6444f7257" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.312 [INFO][3915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" iface="eth0" netns="/var/run/netns/cni-3ac71e66-5185-b594-66b8-92e6444f7257" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.312 [INFO][3915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" iface="eth0" netns="/var/run/netns/cni-3ac71e66-5185-b594-66b8-92e6444f7257" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.312 [INFO][3915] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.312 [INFO][3915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.359 [INFO][3929] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.359 [INFO][3929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.359 [INFO][3929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.369 [WARNING][3929] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.369 [INFO][3929] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.371 [INFO][3929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:14.377163 containerd[1462]: 2024-11-12 20:49:14.375 [INFO][3915] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:14.378949 containerd[1462]: time="2024-11-12T20:49:14.378866706Z" level=info msg="TearDown network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\" successfully" Nov 12 20:49:14.379125 containerd[1462]: time="2024-11-12T20:49:14.379006151Z" level=info msg="StopPodSandbox for \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\" returns successfully" Nov 12 20:49:14.382895 containerd[1462]: time="2024-11-12T20:49:14.381836103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbd766b6b-f77jl,Uid:d0780930-4dfa-4cd9-9093-a1b94ae21874,Namespace:calico-system,Attempt:1,}" Nov 12 20:49:14.386348 systemd[1]: run-netns-cni\x2d3ac71e66\x2d5185\x2db594\x2d66b8\x2d92e6444f7257.mount: Deactivated successfully. 
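The [WARNING] "Asked to release address but it doesn't exist. Ignoring" entries in these teardowns are expected: the earlier RunPodSandbox attempts never got as far as IPAM, so the DEL path finds no handle to release and simply moves on. A toy sketch of that idempotent release (my own illustration, not Calico's code):

```go
// ipam_release.go — releasing an IPAM handle that was never created is a no-op,
// so a failed ADD can always be cleaned up without getting stuck.
package main

import "fmt"

// releaseByHandle frees whatever addresses a handle owns. A missing handle is
// treated as already released, matching the WARNING + "Ignoring" in the log.
func releaseByHandle(store map[string][]string, handle string) []string {
	ips, ok := store[handle]
	if !ok {
		return nil // nothing was ever allocated for this sandbox
	}
	delete(store, handle)
	return ips
}

func main() {
	store := map[string][]string{} // empty: the failed ADDs allocated nothing
	freed := releaseByHandle(store, "k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16")
	fmt.Println("released:", freed) // released: []
}
```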
Nov 12 20:49:14.618776 systemd-networkd[1372]: calibca256e7035: Link UP Nov 12 20:49:14.619923 systemd-networkd[1372]: calibca256e7035: Gained carrier Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.163 [INFO][3879] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0 coredns-6f6b679f8f- kube-system b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5 762 0 2024-11-12 20:48:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-2-eeaeb2d4c6 coredns-6f6b679f8f-bg25k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibca256e7035 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.163 [INFO][3879] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.261 [INFO][3920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" HandleID="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.382 [INFO][3920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" HandleID="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec930), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-2-eeaeb2d4c6", "pod":"coredns-6f6b679f8f-bg25k", "timestamp":"2024-11-12 20:49:14.261419222 +0000 UTC"}, Hostname:"ci-4081.2.0-2-eeaeb2d4c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.382 [INFO][3920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.382 [INFO][3920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.382 [INFO][3920] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-2-eeaeb2d4c6' Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.390 [INFO][3920] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.480 [INFO][3920] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.498 [INFO][3920] ipam/ipam.go 489: Trying affinity for 192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.506 [INFO][3920] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.530 [INFO][3920] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.530 [INFO][3920] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.541 [INFO][3920] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.570 [INFO][3920] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.593 [INFO][3920] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.193/26] block=192.168.60.192/26 handle="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.593 [INFO][3920] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.193/26] handle="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.593 [INFO][3920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
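The ipam.go lines above trace the whole allocation path: take the host-wide lock, confirm this node's affinity for block 192.168.60.192/26, load the block, and claim the next free address — 192.168.60.193 here, then .194 for the calico-kube-controllers pod a few entries later. A toy next-free-address walk over that /26; assuming the block's first address is already in use (e.g. by the node's tunnel interface) is my own simplification, not something shown in the log:

```go
// ipam_block.go — next-free-address allocation from an affine /26 block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.60.192/26") // 64 addresses
	used := map[netip.Addr]bool{
		// Assumed already taken, so the first pod gets .193 as in the log.
		netip.MustParseAddr("192.168.60.192"): true,
	}

	alloc := func() netip.Addr {
		for a := block.Addr(); block.Contains(a); a = a.Next() {
			if !used[a] {
				used[a] = true
				return a
			}
		}
		panic("block exhausted")
	}

	// First two pod allocations on this node, matching the .193 and .194 claims.
	fmt.Println(alloc(), alloc())
}
```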
Nov 12 20:49:14.673138 containerd[1462]: 2024-11-12 20:49:14.593 [INFO][3920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.193/26] IPv6=[] ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" HandleID="k8s-pod-network.0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:14.674677 containerd[1462]: 2024-11-12 20:49:14.597 [INFO][3879] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"", Pod:"coredns-6f6b679f8f-bg25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibca256e7035", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:14.674677 containerd[1462]: 2024-11-12 20:49:14.597 [INFO][3879] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.193/32] ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:14.674677 containerd[1462]: 2024-11-12 20:49:14.599 [INFO][3879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibca256e7035 ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:14.674677 containerd[1462]: 2024-11-12 20:49:14.612 [INFO][3879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" 
WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:14.674677 containerd[1462]: 2024-11-12 20:49:14.613 [INFO][3879] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c", Pod:"coredns-6f6b679f8f-bg25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibca256e7035", MAC:"4a:a2:78:71:8c:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:14.674677 containerd[1462]: 2024-11-12 20:49:14.665 [INFO][3879] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c" Namespace="kube-system" Pod="coredns-6f6b679f8f-bg25k" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:14.743256 containerd[1462]: time="2024-11-12T20:49:14.743142185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:14.743256 containerd[1462]: time="2024-11-12T20:49:14.743222020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:14.744529 containerd[1462]: time="2024-11-12T20:49:14.744263514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:14.744529 containerd[1462]: time="2024-11-12T20:49:14.744425135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:14.774798 systemd[1]: Started cri-containerd-0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c.scope - libcontainer container 0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c. Nov 12 20:49:14.878634 systemd-networkd[1372]: cali0bf81bce2cb: Link UP Nov 12 20:49:14.880626 systemd-networkd[1372]: cali0bf81bce2cb: Gained carrier Nov 12 20:49:14.921057 systemd[1]: Started sshd@9-147.182.197.11:22-139.178.68.195:44400.service - OpenSSH per-connection server daemon (139.178.68.195:44400). Nov 12 20:49:14.926592 containerd[1462]: time="2024-11-12T20:49:14.926056020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bg25k,Uid:b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c\"" Nov 12 20:49:14.937567 kubelet[2510]: E1112 20:49:14.937407 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.469 [INFO][3938] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0 calico-kube-controllers-6fbd766b6b- calico-system d0780930-4dfa-4cd9-9093-a1b94ae21874 773 0 2024-11-12 20:48:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fbd766b6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.0-2-eeaeb2d4c6 calico-kube-controllers-6fbd766b6b-f77jl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0bf81bce2cb [] []}} ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.469 [INFO][3938] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.560 [INFO][3949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" HandleID="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.688 [INFO][3949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" HandleID="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000dc650), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-2-eeaeb2d4c6", "pod":"calico-kube-controllers-6fbd766b6b-f77jl", "timestamp":"2024-11-12 20:49:14.560084466 +0000 UTC"}, Hostname:"ci-4081.2.0-2-eeaeb2d4c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.688 [INFO][3949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.688 [INFO][3949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.692 [INFO][3949] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-2-eeaeb2d4c6' Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.706 [INFO][3949] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.784 [INFO][3949] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.805 [INFO][3949] ipam/ipam.go 489: Trying affinity for 192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.809 [INFO][3949] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.814 [INFO][3949] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.814 [INFO][3949] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.823 [INFO][3949] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.833 [INFO][3949] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.846 [INFO][3949] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.194/26] block=192.168.60.192/26 handle="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.850 [INFO][3949] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.194/26] handle="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.850 [INFO][3949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
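The WorkloadEndpoint dumps above print port numbers as Go hex literals; decoding them recovers the familiar values (0x35 is 53 for dns and dns-tcp, 0x23c1 is 9153 for the coredns metrics port). A trivial sketch of that decode:

```go
// ports_hex.go — the hex Port fields in the endpoint dumps, in decimal.
package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"dns":     0x35,   // 53/UDP
		"dns-tcp": 0x35,   // 53/TCP
		"metrics": 0x23c1, // 9153/TCP
	}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p)
	}
}
```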
Nov 12 20:49:14.962330 containerd[1462]: 2024-11-12 20:49:14.854 [INFO][3949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.194/26] IPv6=[] ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" HandleID="k8s-pod-network.db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.967059 containerd[1462]: 2024-11-12 20:49:14.872 [INFO][3938] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0", GenerateName:"calico-kube-controllers-6fbd766b6b-", Namespace:"calico-system", SelfLink:"", UID:"d0780930-4dfa-4cd9-9093-a1b94ae21874", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbd766b6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"", Pod:"calico-kube-controllers-6fbd766b6b-f77jl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0bf81bce2cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:14.967059 containerd[1462]: 2024-11-12 20:49:14.873 [INFO][3938] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.194/32] ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.967059 containerd[1462]: 2024-11-12 20:49:14.873 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bf81bce2cb ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.967059 containerd[1462]: 2024-11-12 20:49:14.882 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:14.967059 
containerd[1462]: 2024-11-12 20:49:14.883 [INFO][3938] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0", GenerateName:"calico-kube-controllers-6fbd766b6b-", Namespace:"calico-system", SelfLink:"", UID:"d0780930-4dfa-4cd9-9093-a1b94ae21874", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbd766b6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b", Pod:"calico-kube-controllers-6fbd766b6b-f77jl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0bf81bce2cb", MAC:"76:0b:cd:0d:c9:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:14.967059 containerd[1462]: 2024-11-12 20:49:14.956 [INFO][3938] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b" Namespace="calico-system" Pod="calico-kube-controllers-6fbd766b6b-f77jl" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:15.000033 containerd[1462]: time="2024-11-12T20:49:14.998538980Z" level=info msg="CreateContainer within sandbox \"0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:49:15.063821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098876927.mount: Deactivated successfully. Nov 12 20:49:15.064911 containerd[1462]: time="2024-11-12T20:49:15.062095753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:15.064911 containerd[1462]: time="2024-11-12T20:49:15.062162861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:15.064911 containerd[1462]: time="2024-11-12T20:49:15.062178365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:15.064911 containerd[1462]: time="2024-11-12T20:49:15.062292655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:15.085882 containerd[1462]: time="2024-11-12T20:49:15.083444566Z" level=info msg="CreateContainer within sandbox \"0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d196525e2a1b93732c0e7bd13ea0c5a4040d993d4afb521fab626a528df69bd9\"" Nov 12 20:49:15.089032 containerd[1462]: time="2024-11-12T20:49:15.088982843Z" level=info msg="StartContainer for \"d196525e2a1b93732c0e7bd13ea0c5a4040d993d4afb521fab626a528df69bd9\"" Nov 12 20:49:15.114804 sshd[4029]: Accepted publickey for core from 139.178.68.195 port 44400 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:15.118345 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:15.123127 systemd[1]: Started cri-containerd-db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b.scope - libcontainer container db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b. Nov 12 20:49:15.138114 systemd-logind[1448]: New session 10 of user core. Nov 12 20:49:15.140001 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:49:15.169088 systemd[1]: Started cri-containerd-d196525e2a1b93732c0e7bd13ea0c5a4040d993d4afb521fab626a528df69bd9.scope - libcontainer container d196525e2a1b93732c0e7bd13ea0c5a4040d993d4afb521fab626a528df69bd9. Nov 12 20:49:15.170955 containerd[1462]: time="2024-11-12T20:49:15.170824369Z" level=info msg="StopPodSandbox for \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\"" Nov 12 20:49:15.348248 containerd[1462]: time="2024-11-12T20:49:15.347636584Z" level=info msg="StartContainer for \"d196525e2a1b93732c0e7bd13ea0c5a4040d993d4afb521fab626a528df69bd9\" returns successfully" Nov 12 20:49:15.355599 containerd[1462]: time="2024-11-12T20:49:15.353543321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbd766b6b-f77jl,Uid:d0780930-4dfa-4cd9-9093-a1b94ae21874,Namespace:calico-system,Attempt:1,} returns sandbox id \"db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b\"" Nov 12 20:49:15.388984 containerd[1462]: time="2024-11-12T20:49:15.388253276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:49:15.451877 kubelet[2510]: I1112 20:49:15.450933 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:15.451877 kubelet[2510]: E1112 20:49:15.451545 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.344 [INFO][4131] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.346 [INFO][4131] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" iface="eth0" netns="/var/run/netns/cni-caa5a09c-f91b-9412-313a-926466452f7a" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.355 [INFO][4131] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" iface="eth0" netns="/var/run/netns/cni-caa5a09c-f91b-9412-313a-926466452f7a" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.355 [INFO][4131] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" iface="eth0" netns="/var/run/netns/cni-caa5a09c-f91b-9412-313a-926466452f7a" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.355 [INFO][4131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.356 [INFO][4131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.432 [INFO][4158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.434 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.437 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.486 [WARNING][4158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.487 [INFO][4158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.502 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:15.616321 containerd[1462]: 2024-11-12 20:49:15.521 [INFO][4131] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:15.618088 containerd[1462]: time="2024-11-12T20:49:15.617894853Z" level=info msg="TearDown network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\" successfully" Nov 12 20:49:15.618632 containerd[1462]: time="2024-11-12T20:49:15.617928440Z" level=info msg="StopPodSandbox for \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\" returns successfully" Nov 12 20:49:15.663998 containerd[1462]: time="2024-11-12T20:49:15.663739490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-qgxrw,Uid:0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:49:15.675940 sshd[4029]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:15.682565 systemd[1]: sshd@9-147.182.197.11:22-139.178.68.195:44400.service: Deactivated successfully. Nov 12 20:49:15.685170 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:49:15.690335 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:49:15.693628 systemd-logind[1448]: Removed session 10. Nov 12 20:49:15.716372 kubelet[2510]: E1112 20:49:15.715624 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:15.746641 kubelet[2510]: I1112 20:49:15.746313 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bg25k" podStartSLOduration=36.746281389 podStartE2EDuration="36.746281389s" podCreationTimestamp="2024-11-12 20:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:15.743694879 +0000 UTC m=+40.748672246" watchObservedRunningTime="2024-11-12 20:49:15.746281389 +0000 UTC m=+40.751258759" Nov 12 20:49:15.749319 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Nov 12 20:49:15.860719 systemd[1]: run-netns-cni\x2dcaa5a09c\x2df91b\x2d9412\x2d313a\x2d926466452f7a.mount: Deactivated successfully. Nov 12 20:49:15.927621 kubelet[2510]: E1112 20:49:15.927415 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:15.993523 systemd[1]: run-containerd-runc-k8s.io-013e7d5423e39b0bd926a47b1792ea9bd3c9b29c88d507c775010dd5e9551580-runc.SRdNq1.mount: Deactivated successfully. 
Nov 12 20:49:16.133071 systemd-networkd[1372]: cali0bf81bce2cb: Gained IPv6LL Nov 12 20:49:16.165705 containerd[1462]: time="2024-11-12T20:49:16.165484528Z" level=info msg="StopPodSandbox for \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\"" Nov 12 20:49:16.167183 systemd-networkd[1372]: calif5935e81b84: Link UP Nov 12 20:49:16.167471 containerd[1462]: time="2024-11-12T20:49:16.167380814Z" level=info msg="StopPodSandbox for \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\"" Nov 12 20:49:16.168410 systemd-networkd[1372]: calif5935e81b84: Gained carrier Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:15.887 [INFO][4193] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0 calico-apiserver-857999858d- calico-apiserver 0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6 813 0 2024-11-12 20:48:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857999858d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-2-eeaeb2d4c6 calico-apiserver-857999858d-qgxrw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif5935e81b84 [] []}} ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:15.889 [INFO][4193] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:15.958 [INFO][4205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" HandleID="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.075 [INFO][4205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" HandleID="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bbb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-2-eeaeb2d4c6", "pod":"calico-apiserver-857999858d-qgxrw", "timestamp":"2024-11-12 20:49:15.958869124 +0000 UTC"}, Hostname:"ci-4081.2.0-2-eeaeb2d4c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.075 [INFO][4205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.075 [INFO][4205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.075 [INFO][4205] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-2-eeaeb2d4c6' Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.082 [INFO][4205] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.092 [INFO][4205] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.118 [INFO][4205] ipam/ipam.go 489: Trying affinity for 192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.122 [INFO][4205] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.127 [INFO][4205] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.127 [INFO][4205] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.130 [INFO][4205] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.140 [INFO][4205] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.152 [INFO][4205] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.195/26] block=192.168.60.192/26 handle="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.153 [INFO][4205] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.195/26] handle="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.153 [INFO][4205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
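As in the earlier allocation, this request is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": with several sandboxes being networked on the node at the same time, that lock is what keeps two concurrent CNI invocations from claiming the same address. The toy allocator below shows the same idea with a single mutex; it is a stand-in for illustration, not Calico's code.

package main

import (
	"fmt"
	"sync"
)

// allocator hands out offsets within one block; its single mutex plays the role
// of the host-wide IPAM lock, so only one assignment runs at a time.
type allocator struct {
	mu   sync.Mutex
	used map[int]bool
}

func (a *allocator) assign() int {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	for off := 0; ; off++ {
		if !a.used[off] {
			a.used[off] = true
			return off
		}
	}
}

func main() {
	a := &allocator{used: map[int]bool{0: true, 1: true}}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // several pods being networked at once, as in the log
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("assigned offset", a.assign())
		}()
	}
	wg.Wait() // offsets 2, 3 and 4 come out exactly once each, in some order
}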
Nov 12 20:49:16.221945 containerd[1462]: 2024-11-12 20:49:16.153 [INFO][4205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.195/26] IPv6=[] ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" HandleID="k8s-pod-network.df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:16.224214 containerd[1462]: 2024-11-12 20:49:16.157 [INFO][4193] cni-plugin/k8s.go 386: Populated endpoint ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"", Pod:"calico-apiserver-857999858d-qgxrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5935e81b84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:16.224214 containerd[1462]: 2024-11-12 20:49:16.157 [INFO][4193] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.195/32] ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:16.224214 containerd[1462]: 2024-11-12 20:49:16.157 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5935e81b84 ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:16.224214 containerd[1462]: 2024-11-12 20:49:16.171 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:16.224214 containerd[1462]: 2024-11-12 20:49:16.172 [INFO][4193] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c", Pod:"calico-apiserver-857999858d-qgxrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5935e81b84", MAC:"fa:dc:fa:d5:41:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:16.224214 containerd[1462]: 2024-11-12 20:49:16.209 [INFO][4193] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-qgxrw" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:16.262485 systemd-networkd[1372]: calibca256e7035: Gained IPv6LL Nov 12 20:49:16.326925 containerd[1462]: time="2024-11-12T20:49:16.326185816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:16.326925 containerd[1462]: time="2024-11-12T20:49:16.326284914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:16.326925 containerd[1462]: time="2024-11-12T20:49:16.326309898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:16.331403 containerd[1462]: time="2024-11-12T20:49:16.327593588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:16.401511 systemd[1]: Started cri-containerd-df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c.scope - libcontainer container df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c. 
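Each cali* veth that systemd-networkd reports as "Link UP" is followed a moment later by "Gained IPv6LL". For the host-side interface calif5935e81b84, whose MAC fa:dc:fa:d5:41:bb appears in the endpoint dump above, the textbook EUI-64 construction of such a link-local address is sketched below; networkd can also be configured to derive the interface identifier differently (for example stable-privacy), so this shows the classic derivation rather than a claim about this host's configuration.

package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC builds the fe80::/64 EUI-64 address for a 48-bit MAC:
// flip the universal/local bit of the first octet and insert ff:fe in the middle.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, err := net.ParseMAC("fa:dc:fa:d5:41:bb") // host-side MAC of calif5935e81b84, from the log
	if err != nil {
		panic(err)
	}
	fmt.Println(linkLocalFromMAC(mac)) // fe80::f8dc:faff:fed5:41bb
}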
Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.440 [INFO][4262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.440 [INFO][4262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" iface="eth0" netns="/var/run/netns/cni-fa74fd34-c9a7-2bac-b3e5-955b12aa3e27" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.441 [INFO][4262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" iface="eth0" netns="/var/run/netns/cni-fa74fd34-c9a7-2bac-b3e5-955b12aa3e27" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.441 [INFO][4262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" iface="eth0" netns="/var/run/netns/cni-fa74fd34-c9a7-2bac-b3e5-955b12aa3e27" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.441 [INFO][4262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.441 [INFO][4262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.495 [INFO][4322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.495 [INFO][4322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.495 [INFO][4322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.512 [WARNING][4322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.513 [INFO][4322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.516 [INFO][4322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:16.525279 containerd[1462]: 2024-11-12 20:49:16.519 [INFO][4262] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:16.530526 containerd[1462]: time="2024-11-12T20:49:16.525543236Z" level=info msg="TearDown network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\" successfully" Nov 12 20:49:16.530526 containerd[1462]: time="2024-11-12T20:49:16.525602256Z" level=info msg="StopPodSandbox for \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\" returns successfully" Nov 12 20:49:16.542775 containerd[1462]: time="2024-11-12T20:49:16.542263360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srhkg,Uid:09520a02-648f-4672-85a7-9b0a62557d5f,Namespace:calico-system,Attempt:1,}" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.418 [INFO][4263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.420 [INFO][4263] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" iface="eth0" netns="/var/run/netns/cni-4211c2ac-b73a-2900-15a9-4e6dbf600bf9" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.421 [INFO][4263] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" iface="eth0" netns="/var/run/netns/cni-4211c2ac-b73a-2900-15a9-4e6dbf600bf9" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.422 [INFO][4263] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" iface="eth0" netns="/var/run/netns/cni-4211c2ac-b73a-2900-15a9-4e6dbf600bf9" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.423 [INFO][4263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.423 [INFO][4263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.505 [INFO][4318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.511 [INFO][4318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.516 [INFO][4318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.539 [WARNING][4318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.539 [INFO][4318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.545 [INFO][4318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:16.553922 containerd[1462]: 2024-11-12 20:49:16.550 [INFO][4263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:16.554679 containerd[1462]: time="2024-11-12T20:49:16.554085552Z" level=info msg="TearDown network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\" successfully" Nov 12 20:49:16.554679 containerd[1462]: time="2024-11-12T20:49:16.554127542Z" level=info msg="StopPodSandbox for \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\" returns successfully" Nov 12 20:49:16.556971 kubelet[2510]: E1112 20:49:16.555344 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:16.557598 containerd[1462]: time="2024-11-12T20:49:16.556284750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvkdh,Uid:5666e17e-e08c-4c69-b9f9-6f9b8433b194,Namespace:kube-system,Attempt:1,}" Nov 12 20:49:16.567469 containerd[1462]: time="2024-11-12T20:49:16.567374207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-qgxrw,Uid:0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c\"" Nov 12 20:49:16.723419 kubelet[2510]: E1112 20:49:16.722440 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:16.860007 systemd[1]: run-netns-cni\x2d4211c2ac\x2db73a\x2d2900\x2d15a9\x2d4e6dbf600bf9.mount: Deactivated successfully. Nov 12 20:49:16.861050 systemd[1]: run-netns-cni\x2dfa74fd34\x2dc9a7\x2d2bac\x2db3e5\x2d955b12aa3e27.mount: Deactivated successfully. 
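Both teardowns above follow the same path: the workload's veth is already gone, the plugin asks IPAM to release the sandbox's address by handle, finds no record ("Asked to release address but it doesn't exist. Ignoring"), and StopPodSandbox still returns successfully, because a CNI DEL has to be safe to repeat. The toy store below illustrates that contract; the handle and address in it are invented for the example, and a real plugin keeps this state in the cluster datastore rather than in memory.

package main

import "fmt"

// store maps an IPAM handle to its addresses.
type store map[string][]string

// release is idempotent: freeing a handle that is not present is noted and
// ignored, so a repeated CNI DEL for the same sandbox cannot fail.
func (s store) release(handle string) {
	addrs, ok := s[handle]
	if !ok {
		fmt.Printf("release %s: no allocation found, ignoring\n", handle)
		return
	}
	delete(s, handle)
	fmt.Printf("release %s: freed %v\n", handle, addrs)
}

func main() {
	s := store{"k8s-pod-network.example-handle": {"192.168.60.193/26"}} // hypothetical entry
	s.release("k8s-pod-network.example-handle") // first DEL frees the address
	s.release("k8s-pod-network.example-handle") // second DEL is a harmless no-op
}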
Nov 12 20:49:16.943057 systemd-networkd[1372]: cali326800ec2f3: Link UP Nov 12 20:49:16.944907 systemd-networkd[1372]: cali326800ec2f3: Gained carrier Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.687 [INFO][4339] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0 coredns-6f6b679f8f- kube-system 5666e17e-e08c-4c69-b9f9-6f9b8433b194 839 0 2024-11-12 20:48:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-2-eeaeb2d4c6 coredns-6f6b679f8f-lvkdh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali326800ec2f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.687 [INFO][4339] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.793 [INFO][4362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" HandleID="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.837 [INFO][4362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" HandleID="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035cec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-2-eeaeb2d4c6", "pod":"coredns-6f6b679f8f-lvkdh", "timestamp":"2024-11-12 20:49:16.793669805 +0000 UTC"}, Hostname:"ci-4081.2.0-2-eeaeb2d4c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.838 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.839 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.839 [INFO][4362] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-2-eeaeb2d4c6' Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.865 [INFO][4362] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.883 [INFO][4362] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.895 [INFO][4362] ipam/ipam.go 489: Trying affinity for 192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.898 [INFO][4362] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.903 [INFO][4362] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.903 [INFO][4362] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.908 [INFO][4362] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6 Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.918 [INFO][4362] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.932 [INFO][4362] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.196/26] block=192.168.60.192/26 handle="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.932 [INFO][4362] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.196/26] handle="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.932 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
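The Workload and WorkloadEndpoint keys in these entries are built by joining node name, orchestrator, pod name and interface with single dashes, after doubling any dash inside the original names so the separators stay unambiguous; that is how ci-4081.2.0-2-eeaeb2d4c6 shows up as ci--4081.2.0--2--eeaeb2d4c6. The snippet below simply reconstructs that convention as it can be read off the log, without claiming to be the exact library routine.

package main

import (
	"fmt"
	"strings"
)

// escape doubles every dash so that the single dashes joining the fields of a
// workload endpoint name cannot be confused with dashes inside the names.
func escape(s string) string { return strings.ReplaceAll(s, "-", "--") }

func main() {
	node, pod, iface := "ci-4081.2.0-2-eeaeb2d4c6", "coredns-6f6b679f8f-lvkdh", "eth0"
	fmt.Printf("%s-k8s-%s-%s\n", escape(node), escape(pod), iface)
	// ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0
}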
Nov 12 20:49:16.970958 containerd[1462]: 2024-11-12 20:49:16.932 [INFO][4362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.196/26] IPv6=[] ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" HandleID="k8s-pod-network.86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.974228 containerd[1462]: 2024-11-12 20:49:16.938 [INFO][4339] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5666e17e-e08c-4c69-b9f9-6f9b8433b194", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"", Pod:"coredns-6f6b679f8f-lvkdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali326800ec2f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:16.974228 containerd[1462]: 2024-11-12 20:49:16.938 [INFO][4339] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.196/32] ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.974228 containerd[1462]: 2024-11-12 20:49:16.938 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali326800ec2f3 ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.974228 containerd[1462]: 2024-11-12 20:49:16.945 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" 
WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:16.974228 containerd[1462]: 2024-11-12 20:49:16.946 [INFO][4339] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5666e17e-e08c-4c69-b9f9-6f9b8433b194", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6", Pod:"coredns-6f6b679f8f-lvkdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali326800ec2f3", MAC:"0a:01:e8:6b:41:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:16.974228 containerd[1462]: 2024-11-12 20:49:16.967 [INFO][4339] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6" Namespace="kube-system" Pod="coredns-6f6b679f8f-lvkdh" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:17.049338 containerd[1462]: time="2024-11-12T20:49:17.049099736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:17.049338 containerd[1462]: time="2024-11-12T20:49:17.049258007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:17.049338 containerd[1462]: time="2024-11-12T20:49:17.049302918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:17.049990 containerd[1462]: time="2024-11-12T20:49:17.049447005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:17.077269 systemd-networkd[1372]: calib75023cc603: Link UP Nov 12 20:49:17.081110 systemd-networkd[1372]: calib75023cc603: Gained carrier Nov 12 20:49:17.110400 systemd[1]: run-containerd-runc-k8s.io-86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6-runc.STJviR.mount: Deactivated successfully. Nov 12 20:49:17.139657 systemd[1]: Started cri-containerd-86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6.scope - libcontainer container 86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6. Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.707 [INFO][4349] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0 csi-node-driver- calico-system 09520a02-648f-4672-85a7-9b0a62557d5f 840 0 2024-11-12 20:48:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:548d65b7bf k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.0-2-eeaeb2d4c6 csi-node-driver-srhkg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib75023cc603 [] []}} ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.707 [INFO][4349] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.876 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" HandleID="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.900 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" HandleID="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c86c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-2-eeaeb2d4c6", "pod":"csi-node-driver-srhkg", "timestamp":"2024-11-12 20:49:16.876240944 +0000 UTC"}, Hostname:"ci-4081.2.0-2-eeaeb2d4c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.901 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.932 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
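One small detail of the endpoint dumps worth decoding: the struct printer writes numeric fields as Go hex literals, so the coredns workload endpoint a few entries above carries Port:0x35 for its dns and dns-tcp entries and Port:0x23c1 for metrics. Converted back to decimal these are the usual CoreDNS ports, as the short listing below shows.

package main

import "fmt"

func main() {
	// Hex port values as they appear in the v3.WorkloadEndpoint dumps above.
	ports := []struct {
		name  string
		value uint16
	}{
		{"dns (UDP)", 0x35},
		{"dns-tcp (TCP)", 0x35},
		{"metrics (TCP)", 0x23c1},
	}
	for _, p := range ports {
		fmt.Printf("%-14s 0x%x = %d\n", p.name, p.value, p.value) // 53, 53, 9153
	}
}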
Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.933 [INFO][4366] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-2-eeaeb2d4c6' Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.965 [INFO][4366] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.983 [INFO][4366] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:16.997 [INFO][4366] ipam/ipam.go 489: Trying affinity for 192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.002 [INFO][4366] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.012 [INFO][4366] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.013 [INFO][4366] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.019 [INFO][4366] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068 Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.037 [INFO][4366] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.062 [INFO][4366] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.197/26] block=192.168.60.192/26 handle="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.062 [INFO][4366] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.197/26] handle="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.062 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
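With 192.168.60.197 claimed for the CSI node driver, this section has now handed out four consecutive addresses (.194 through .197) from the node's single affine block. A /26 leaves six host bits, so one such block covers 64 addresses before the node would need to claim a further block; the quick calculation:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Addresses .194 through .197 in this section all come from this one block.
	_, block, _ := net.ParseCIDR("192.168.60.192/26")
	ones, bits := block.Mask.Size()
	fmt.Printf("%v holds %d addresses\n", block, 1<<(bits-ones)) // 64
}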
Nov 12 20:49:17.150722 containerd[1462]: 2024-11-12 20:49:17.062 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.197/26] IPv6=[] ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" HandleID="k8s-pod-network.3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:17.153527 containerd[1462]: 2024-11-12 20:49:17.069 [INFO][4349] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09520a02-648f-4672-85a7-9b0a62557d5f", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"", Pod:"csi-node-driver-srhkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib75023cc603", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:17.153527 containerd[1462]: 2024-11-12 20:49:17.069 [INFO][4349] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.197/32] ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:17.153527 containerd[1462]: 2024-11-12 20:49:17.069 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib75023cc603 ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:17.153527 containerd[1462]: 2024-11-12 20:49:17.080 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:17.153527 containerd[1462]: 2024-11-12 20:49:17.083 [INFO][4349] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09520a02-648f-4672-85a7-9b0a62557d5f", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068", Pod:"csi-node-driver-srhkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib75023cc603", MAC:"82:d1:4e:5d:4b:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:17.153527 containerd[1462]: 2024-11-12 20:49:17.130 [INFO][4349] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068" Namespace="calico-system" Pod="csi-node-driver-srhkg" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:17.171049 containerd[1462]: time="2024-11-12T20:49:17.170774060Z" level=info msg="StopPodSandbox for \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\"" Nov 12 20:49:17.225799 systemd-networkd[1372]: calif5935e81b84: Gained IPv6LL Nov 12 20:49:17.250148 containerd[1462]: time="2024-11-12T20:49:17.250013787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:17.250148 containerd[1462]: time="2024-11-12T20:49:17.250101101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:17.250486 containerd[1462]: time="2024-11-12T20:49:17.250117913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:17.250486 containerd[1462]: time="2024-11-12T20:49:17.250221221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:17.262369 containerd[1462]: time="2024-11-12T20:49:17.262302915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvkdh,Uid:5666e17e-e08c-4c69-b9f9-6f9b8433b194,Namespace:kube-system,Attempt:1,} returns sandbox id \"86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6\"" Nov 12 20:49:17.264137 kubelet[2510]: E1112 20:49:17.264107 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:17.274346 containerd[1462]: time="2024-11-12T20:49:17.274216825Z" level=info msg="CreateContainer within sandbox \"86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:49:17.308336 systemd[1]: Started cri-containerd-3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068.scope - libcontainer container 3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068. Nov 12 20:49:17.382986 containerd[1462]: time="2024-11-12T20:49:17.382347424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srhkg,Uid:09520a02-648f-4672-85a7-9b0a62557d5f,Namespace:calico-system,Attempt:1,} returns sandbox id \"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068\"" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.323 [INFO][4452] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.323 [INFO][4452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" iface="eth0" netns="/var/run/netns/cni-8362c03a-da42-ed90-4e84-4a157f5a2f3b" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.323 [INFO][4452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" iface="eth0" netns="/var/run/netns/cni-8362c03a-da42-ed90-4e84-4a157f5a2f3b" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.326 [INFO][4452] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" iface="eth0" netns="/var/run/netns/cni-8362c03a-da42-ed90-4e84-4a157f5a2f3b" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.327 [INFO][4452] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.328 [INFO][4452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.397 [INFO][4498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.397 [INFO][4498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.397 [INFO][4498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.413 [WARNING][4498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.413 [INFO][4498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.418 [INFO][4498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:17.425007 containerd[1462]: 2024-11-12 20:49:17.421 [INFO][4452] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:17.426499 containerd[1462]: time="2024-11-12T20:49:17.426094672Z" level=info msg="TearDown network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\" successfully" Nov 12 20:49:17.426499 containerd[1462]: time="2024-11-12T20:49:17.426137683Z" level=info msg="StopPodSandbox for \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\" returns successfully" Nov 12 20:49:17.427330 containerd[1462]: time="2024-11-12T20:49:17.427074644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-g2wj2,Uid:3cfde3c6-532c-42e8-b5c0-a7b194fb76ba,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:49:17.439321 containerd[1462]: time="2024-11-12T20:49:17.438973518Z" level=info msg="CreateContainer within sandbox \"86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b57e8f1f2fb97f06b0d5ba2bbcb3d4ca57caae32c55be62e24d61b042225ac06\"" Nov 12 20:49:17.470435 containerd[1462]: time="2024-11-12T20:49:17.457707441Z" level=info msg="StartContainer for \"b57e8f1f2fb97f06b0d5ba2bbcb3d4ca57caae32c55be62e24d61b042225ac06\"" Nov 12 20:49:17.558091 systemd[1]: Started cri-containerd-b57e8f1f2fb97f06b0d5ba2bbcb3d4ca57caae32c55be62e24d61b042225ac06.scope - libcontainer container b57e8f1f2fb97f06b0d5ba2bbcb3d4ca57caae32c55be62e24d61b042225ac06. Nov 12 20:49:17.647384 containerd[1462]: time="2024-11-12T20:49:17.646573122Z" level=info msg="StartContainer for \"b57e8f1f2fb97f06b0d5ba2bbcb3d4ca57caae32c55be62e24d61b042225ac06\" returns successfully" Nov 12 20:49:17.766199 kubelet[2510]: E1112 20:49:17.766049 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:17.767797 kubelet[2510]: E1112 20:49:17.767477 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:17.869894 systemd[1]: run-netns-cni\x2d8362c03a\x2dda42\x2ded90\x2d4e84\x2d4a157f5a2f3b.mount: Deactivated successfully. 
Nov 12 20:49:18.068008 systemd-networkd[1372]: cali757cd175588: Link UP Nov 12 20:49:18.068377 systemd-networkd[1372]: cali757cd175588: Gained carrier Nov 12 20:49:18.123351 kubelet[2510]: I1112 20:49:18.121575 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lvkdh" podStartSLOduration=39.121486607 podStartE2EDuration="39.121486607s" podCreationTimestamp="2024-11-12 20:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:17.887572139 +0000 UTC m=+42.892549506" watchObservedRunningTime="2024-11-12 20:49:18.121486607 +0000 UTC m=+43.126463978" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.577 [INFO][4517] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0 calico-apiserver-857999858d- calico-apiserver 3cfde3c6-532c-42e8-b5c0-a7b194fb76ba 860 0 2024-11-12 20:48:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857999858d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-2-eeaeb2d4c6 calico-apiserver-857999858d-g2wj2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali757cd175588 [] []}} ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.577 [INFO][4517] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.705 [INFO][4553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" HandleID="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.750 [INFO][4553] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" HandleID="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ace0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-2-eeaeb2d4c6", "pod":"calico-apiserver-857999858d-g2wj2", "timestamp":"2024-11-12 20:49:17.705425046 +0000 UTC"}, Hostname:"ci-4081.2.0-2-eeaeb2d4c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.750 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.752 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.752 [INFO][4553] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-2-eeaeb2d4c6' Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.769 [INFO][4553] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.876 [INFO][4553] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.929 [INFO][4553] ipam/ipam.go 489: Trying affinity for 192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.935 [INFO][4553] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.976 [INFO][4553] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.192/26 host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.976 [INFO][4553] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.192/26 handle="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.981 [INFO][4553] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193 Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:17.998 [INFO][4553] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.192/26 handle="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:18.043 [INFO][4553] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.198/26] block=192.168.60.192/26 handle="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:18.043 [INFO][4553] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.198/26] handle="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" host="ci-4081.2.0-2-eeaeb2d4c6" Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:18.043 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:49:18.130880 containerd[1462]: 2024-11-12 20:49:18.043 [INFO][4553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.198/26] IPv6=[] ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" HandleID="k8s-pod-network.95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:18.135307 containerd[1462]: 2024-11-12 20:49:18.052 [INFO][4517] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"", Pod:"calico-apiserver-857999858d-g2wj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757cd175588", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:18.135307 containerd[1462]: 2024-11-12 20:49:18.052 [INFO][4517] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.198/32] ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:18.135307 containerd[1462]: 2024-11-12 20:49:18.052 [INFO][4517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali757cd175588 ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:18.135307 containerd[1462]: 2024-11-12 20:49:18.068 [INFO][4517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:18.135307 containerd[1462]: 2024-11-12 20:49:18.077 [INFO][4517] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193", Pod:"calico-apiserver-857999858d-g2wj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757cd175588", MAC:"ba:42:46:66:fe:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:18.135307 containerd[1462]: 2024-11-12 20:49:18.121 [INFO][4517] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193" Namespace="calico-apiserver" Pod="calico-apiserver-857999858d-g2wj2" WorkloadEndpoint="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:18.232201 containerd[1462]: time="2024-11-12T20:49:18.230893572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:49:18.232201 containerd[1462]: time="2024-11-12T20:49:18.230995305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:49:18.232201 containerd[1462]: time="2024-11-12T20:49:18.231014983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:18.234191 containerd[1462]: time="2024-11-12T20:49:18.233624102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:49:18.305275 systemd[1]: Started cri-containerd-95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193.scope - libcontainer container 95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193. 
Nov 12 20:49:18.403730 containerd[1462]: time="2024-11-12T20:49:18.403613506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857999858d-g2wj2,Uid:3cfde3c6-532c-42e8-b5c0-a7b194fb76ba,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193\"" Nov 12 20:49:18.437259 systemd-networkd[1372]: cali326800ec2f3: Gained IPv6LL Nov 12 20:49:18.630747 systemd-networkd[1372]: calib75023cc603: Gained IPv6LL Nov 12 20:49:18.776661 kubelet[2510]: E1112 20:49:18.776627 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:18.813068 containerd[1462]: time="2024-11-12T20:49:18.812998327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:18.819255 containerd[1462]: time="2024-11-12T20:49:18.819167526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:49:18.823813 containerd[1462]: time="2024-11-12T20:49:18.822531034Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:18.833132 containerd[1462]: time="2024-11-12T20:49:18.833041647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:18.835721 containerd[1462]: time="2024-11-12T20:49:18.835672378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 3.446933003s" Nov 12 20:49:18.835936 containerd[1462]: time="2024-11-12T20:49:18.835913485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:49:18.839263 containerd[1462]: time="2024-11-12T20:49:18.839220146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:49:18.867474 containerd[1462]: time="2024-11-12T20:49:18.865256455Z" level=info msg="CreateContainer within sandbox \"db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:49:18.917052 containerd[1462]: time="2024-11-12T20:49:18.916915386Z" level=info msg="CreateContainer within sandbox \"db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"518e0a6ab8f6d6e9739ef35df0d8e45a972ab4d3191301dfb6c01b254f3505dd\"" Nov 12 20:49:18.918075 containerd[1462]: time="2024-11-12T20:49:18.917707307Z" level=info msg="StartContainer for \"518e0a6ab8f6d6e9739ef35df0d8e45a972ab4d3191301dfb6c01b254f3505dd\"" Nov 12 20:49:18.971264 systemd[1]: Started cri-containerd-518e0a6ab8f6d6e9739ef35df0d8e45a972ab4d3191301dfb6c01b254f3505dd.scope - 
libcontainer container 518e0a6ab8f6d6e9739ef35df0d8e45a972ab4d3191301dfb6c01b254f3505dd. Nov 12 20:49:19.051252 containerd[1462]: time="2024-11-12T20:49:19.051075460Z" level=info msg="StartContainer for \"518e0a6ab8f6d6e9739ef35df0d8e45a972ab4d3191301dfb6c01b254f3505dd\" returns successfully" Nov 12 20:49:19.653239 systemd-networkd[1372]: cali757cd175588: Gained IPv6LL Nov 12 20:49:19.779399 kubelet[2510]: E1112 20:49:19.779143 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:49:20.694227 systemd[1]: Started sshd@10-147.182.197.11:22-139.178.68.195:59858.service - OpenSSH per-connection server daemon (139.178.68.195:59858). Nov 12 20:49:20.783233 kubelet[2510]: I1112 20:49:20.783039 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:20.816966 sshd[4681]: Accepted publickey for core from 139.178.68.195 port 59858 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:20.820392 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:20.831512 systemd-logind[1448]: New session 11 of user core. Nov 12 20:49:20.838332 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:49:21.606172 sshd[4681]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:21.608701 containerd[1462]: time="2024-11-12T20:49:21.607957944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:21.613047 systemd[1]: sshd@10-147.182.197.11:22-139.178.68.195:59858.service: Deactivated successfully. Nov 12 20:49:21.616608 containerd[1462]: time="2024-11-12T20:49:21.614600155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:49:21.615900 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:49:21.619433 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:49:21.621099 containerd[1462]: time="2024-11-12T20:49:21.620401684Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:21.627024 systemd-logind[1448]: Removed session 11. 
Nov 12 20:49:21.631328 containerd[1462]: time="2024-11-12T20:49:21.631239574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:21.632996 containerd[1462]: time="2024-11-12T20:49:21.632745710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.793256119s" Nov 12 20:49:21.632996 containerd[1462]: time="2024-11-12T20:49:21.632808774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:49:21.647790 containerd[1462]: time="2024-11-12T20:49:21.646473243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:49:21.649147 containerd[1462]: time="2024-11-12T20:49:21.648903600Z" level=info msg="CreateContainer within sandbox \"df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:49:21.688044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22740704.mount: Deactivated successfully. Nov 12 20:49:21.701489 containerd[1462]: time="2024-11-12T20:49:21.700796156Z" level=info msg="CreateContainer within sandbox \"df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c2b70bd1c4a419b3ccd10b636c6e5efc133ea683873d0a1e43fe6d133ee8fd64\"" Nov 12 20:49:21.704496 containerd[1462]: time="2024-11-12T20:49:21.704292510Z" level=info msg="StartContainer for \"c2b70bd1c4a419b3ccd10b636c6e5efc133ea683873d0a1e43fe6d133ee8fd64\"" Nov 12 20:49:21.772449 systemd[1]: Started cri-containerd-c2b70bd1c4a419b3ccd10b636c6e5efc133ea683873d0a1e43fe6d133ee8fd64.scope - libcontainer container c2b70bd1c4a419b3ccd10b636c6e5efc133ea683873d0a1e43fe6d133ee8fd64. 
Nov 12 20:49:21.851395 containerd[1462]: time="2024-11-12T20:49:21.851097208Z" level=info msg="StartContainer for \"c2b70bd1c4a419b3ccd10b636c6e5efc133ea683873d0a1e43fe6d133ee8fd64\" returns successfully" Nov 12 20:49:22.832700 kubelet[2510]: I1112 20:49:22.831592 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fbd766b6b-f77jl" podStartSLOduration=28.364564031 podStartE2EDuration="31.831566026s" podCreationTimestamp="2024-11-12 20:48:51 +0000 UTC" firstStartedPulling="2024-11-12 20:49:15.369992362 +0000 UTC m=+40.374969710" lastFinishedPulling="2024-11-12 20:49:18.836994334 +0000 UTC m=+43.841971705" observedRunningTime="2024-11-12 20:49:19.837903055 +0000 UTC m=+44.842880440" watchObservedRunningTime="2024-11-12 20:49:22.831566026 +0000 UTC m=+47.836543396" Nov 12 20:49:22.846458 kubelet[2510]: I1112 20:49:22.845807 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:23.011809 kubelet[2510]: I1112 20:49:23.010667 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857999858d-qgxrw" podStartSLOduration=27.936135952 podStartE2EDuration="33.010581833s" podCreationTimestamp="2024-11-12 20:48:50 +0000 UTC" firstStartedPulling="2024-11-12 20:49:16.57103674 +0000 UTC m=+41.576014101" lastFinishedPulling="2024-11-12 20:49:21.645482621 +0000 UTC m=+46.650459982" observedRunningTime="2024-11-12 20:49:22.832131903 +0000 UTC m=+47.837109273" watchObservedRunningTime="2024-11-12 20:49:23.010581833 +0000 UTC m=+48.015559206" Nov 12 20:49:23.296445 containerd[1462]: time="2024-11-12T20:49:23.296360454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:23.300454 containerd[1462]: time="2024-11-12T20:49:23.300357627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:49:23.303359 containerd[1462]: time="2024-11-12T20:49:23.303276998Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:23.314080 containerd[1462]: time="2024-11-12T20:49:23.313519319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:23.316433 containerd[1462]: time="2024-11-12T20:49:23.316252049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.669730828s" Nov 12 20:49:23.316596 containerd[1462]: time="2024-11-12T20:49:23.316551619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:49:23.319424 containerd[1462]: time="2024-11-12T20:49:23.319260827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:49:23.392823 containerd[1462]: time="2024-11-12T20:49:23.392759429Z" level=info msg="CreateContainer within sandbox 
\"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:49:23.435532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106821420.mount: Deactivated successfully. Nov 12 20:49:23.436857 containerd[1462]: time="2024-11-12T20:49:23.436795376Z" level=info msg="CreateContainer within sandbox \"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9ee1dd036735dad52b5ae5c59518cb9b66d501b197e1bb23b990e25ce3dda159\"" Nov 12 20:49:23.445878 containerd[1462]: time="2024-11-12T20:49:23.438841597Z" level=info msg="StartContainer for \"9ee1dd036735dad52b5ae5c59518cb9b66d501b197e1bb23b990e25ce3dda159\"" Nov 12 20:49:23.514586 systemd[1]: Started cri-containerd-9ee1dd036735dad52b5ae5c59518cb9b66d501b197e1bb23b990e25ce3dda159.scope - libcontainer container 9ee1dd036735dad52b5ae5c59518cb9b66d501b197e1bb23b990e25ce3dda159. Nov 12 20:49:23.581194 containerd[1462]: time="2024-11-12T20:49:23.581017741Z" level=info msg="StartContainer for \"9ee1dd036735dad52b5ae5c59518cb9b66d501b197e1bb23b990e25ce3dda159\" returns successfully" Nov 12 20:49:23.804929 kubelet[2510]: I1112 20:49:23.804891 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:23.904990 containerd[1462]: time="2024-11-12T20:49:23.904648130Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:23.909657 containerd[1462]: time="2024-11-12T20:49:23.909100685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:49:23.912830 containerd[1462]: time="2024-11-12T20:49:23.912747365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 593.318881ms" Nov 12 20:49:23.912830 containerd[1462]: time="2024-11-12T20:49:23.912816798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:49:23.915057 containerd[1462]: time="2024-11-12T20:49:23.914650732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:49:23.916447 containerd[1462]: time="2024-11-12T20:49:23.916391495Z" level=info msg="CreateContainer within sandbox \"95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:49:23.971446 containerd[1462]: time="2024-11-12T20:49:23.971281499Z" level=info msg="CreateContainer within sandbox \"95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70e625ab066a3c3c912e4554f95a124509520c9f55edee6581d4be9e59a63418\"" Nov 12 20:49:23.973821 containerd[1462]: time="2024-11-12T20:49:23.972613586Z" level=info msg="StartContainer for \"70e625ab066a3c3c912e4554f95a124509520c9f55edee6581d4be9e59a63418\"" Nov 12 20:49:24.035201 systemd[1]: Started cri-containerd-70e625ab066a3c3c912e4554f95a124509520c9f55edee6581d4be9e59a63418.scope - 
libcontainer container 70e625ab066a3c3c912e4554f95a124509520c9f55edee6581d4be9e59a63418. Nov 12 20:49:24.100325 containerd[1462]: time="2024-11-12T20:49:24.100271515Z" level=info msg="StartContainer for \"70e625ab066a3c3c912e4554f95a124509520c9f55edee6581d4be9e59a63418\" returns successfully" Nov 12 20:49:24.894406 systemd[1]: run-containerd-runc-k8s.io-70e625ab066a3c3c912e4554f95a124509520c9f55edee6581d4be9e59a63418-runc.hok4vS.mount: Deactivated successfully. Nov 12 20:49:25.829947 containerd[1462]: time="2024-11-12T20:49:25.829884441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:25.832941 containerd[1462]: time="2024-11-12T20:49:25.832875510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:49:25.835303 containerd[1462]: time="2024-11-12T20:49:25.835236554Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:25.841059 containerd[1462]: time="2024-11-12T20:49:25.841011437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:49:25.841805 containerd[1462]: time="2024-11-12T20:49:25.841768233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.927082778s" Nov 12 20:49:25.841805 containerd[1462]: time="2024-11-12T20:49:25.841803473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:49:25.845745 containerd[1462]: time="2024-11-12T20:49:25.845701002Z" level=info msg="CreateContainer within sandbox \"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:49:25.883511 containerd[1462]: time="2024-11-12T20:49:25.883459678Z" level=info msg="CreateContainer within sandbox \"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"07d8aca0e43288c5d02df75aaaaad5069eb96794edb59aa846262d9f644cf2ea\"" Nov 12 20:49:25.884088 containerd[1462]: time="2024-11-12T20:49:25.884053900Z" level=info msg="StartContainer for \"07d8aca0e43288c5d02df75aaaaad5069eb96794edb59aa846262d9f644cf2ea\"" Nov 12 20:49:25.933166 systemd[1]: Started cri-containerd-07d8aca0e43288c5d02df75aaaaad5069eb96794edb59aa846262d9f644cf2ea.scope - libcontainer container 07d8aca0e43288c5d02df75aaaaad5069eb96794edb59aa846262d9f644cf2ea. 
Nov 12 20:49:26.031101 containerd[1462]: time="2024-11-12T20:49:26.030954668Z" level=info msg="StartContainer for \"07d8aca0e43288c5d02df75aaaaad5069eb96794edb59aa846262d9f644cf2ea\" returns successfully" Nov 12 20:49:26.065976 kubelet[2510]: I1112 20:49:26.065882 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857999858d-g2wj2" podStartSLOduration=30.561339816 podStartE2EDuration="36.065840615s" podCreationTimestamp="2024-11-12 20:48:50 +0000 UTC" firstStartedPulling="2024-11-12 20:49:18.40908265 +0000 UTC m=+43.414060021" lastFinishedPulling="2024-11-12 20:49:23.913583471 +0000 UTC m=+48.918560820" observedRunningTime="2024-11-12 20:49:24.83687679 +0000 UTC m=+49.841854159" watchObservedRunningTime="2024-11-12 20:49:26.065840615 +0000 UTC m=+51.070817986" Nov 12 20:49:26.419700 kubelet[2510]: I1112 20:49:26.419568 2510 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:49:26.423093 kubelet[2510]: I1112 20:49:26.422972 2510 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:49:26.632052 systemd[1]: Started sshd@11-147.182.197.11:22-139.178.68.195:56224.service - OpenSSH per-connection server daemon (139.178.68.195:56224). Nov 12 20:49:26.756037 sshd[4915]: Accepted publickey for core from 139.178.68.195 port 56224 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:26.760767 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:26.769622 systemd-logind[1448]: New session 12 of user core. Nov 12 20:49:26.775174 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:49:27.183920 sshd[4915]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:27.199168 systemd[1]: sshd@11-147.182.197.11:22-139.178.68.195:56224.service: Deactivated successfully. Nov 12 20:49:27.202802 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:49:27.205483 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:49:27.211350 systemd[1]: Started sshd@12-147.182.197.11:22-139.178.68.195:56240.service - OpenSSH per-connection server daemon (139.178.68.195:56240). Nov 12 20:49:27.213348 systemd-logind[1448]: Removed session 12. Nov 12 20:49:27.269675 sshd[4931]: Accepted publickey for core from 139.178.68.195 port 56240 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:27.271753 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:27.278765 systemd-logind[1448]: New session 13 of user core. Nov 12 20:49:27.286136 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:49:27.580292 sshd[4931]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:27.595539 systemd[1]: sshd@12-147.182.197.11:22-139.178.68.195:56240.service: Deactivated successfully. Nov 12 20:49:27.601500 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:49:27.606014 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:49:27.616704 systemd[1]: Started sshd@13-147.182.197.11:22-139.178.68.195:56244.service - OpenSSH per-connection server daemon (139.178.68.195:56244). Nov 12 20:49:27.622633 systemd-logind[1448]: Removed session 13. 
Nov 12 20:49:27.686657 sshd[4942]: Accepted publickey for core from 139.178.68.195 port 56244 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:27.688892 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:27.696143 systemd-logind[1448]: New session 14 of user core. Nov 12 20:49:27.701054 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:49:27.885488 sshd[4942]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:27.891555 systemd[1]: sshd@13-147.182.197.11:22-139.178.68.195:56244.service: Deactivated successfully. Nov 12 20:49:27.894307 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:49:27.896005 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:49:27.897264 systemd-logind[1448]: Removed session 14. Nov 12 20:49:32.908196 systemd[1]: Started sshd@14-147.182.197.11:22-139.178.68.195:56260.service - OpenSSH per-connection server daemon (139.178.68.195:56260). Nov 12 20:49:32.953487 sshd[4961]: Accepted publickey for core from 139.178.68.195 port 56260 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:32.955831 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:32.963955 systemd-logind[1448]: New session 15 of user core. Nov 12 20:49:32.970186 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:49:33.140311 sshd[4961]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:33.145605 systemd[1]: sshd@14-147.182.197.11:22-139.178.68.195:56260.service: Deactivated successfully. Nov 12 20:49:33.149141 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:49:33.150271 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:49:33.151890 systemd-logind[1448]: Removed session 15. Nov 12 20:49:35.195048 containerd[1462]: time="2024-11-12T20:49:35.195001606Z" level=info msg="StopPodSandbox for \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\"" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.331 [WARNING][4993] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0", GenerateName:"calico-kube-controllers-6fbd766b6b-", Namespace:"calico-system", SelfLink:"", UID:"d0780930-4dfa-4cd9-9093-a1b94ae21874", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbd766b6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b", Pod:"calico-kube-controllers-6fbd766b6b-f77jl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0bf81bce2cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.332 [INFO][4993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.332 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" iface="eth0" netns="" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.332 [INFO][4993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.332 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.377 [INFO][4999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.379 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.379 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.387 [WARNING][4999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.387 [INFO][4999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.390 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:35.397131 containerd[1462]: 2024-11-12 20:49:35.393 [INFO][4993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.398761 containerd[1462]: time="2024-11-12T20:49:35.397293208Z" level=info msg="TearDown network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\" successfully" Nov 12 20:49:35.398761 containerd[1462]: time="2024-11-12T20:49:35.397332159Z" level=info msg="StopPodSandbox for \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\" returns successfully" Nov 12 20:49:35.398761 containerd[1462]: time="2024-11-12T20:49:35.398134167Z" level=info msg="RemovePodSandbox for \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\"" Nov 12 20:49:35.398761 containerd[1462]: time="2024-11-12T20:49:35.398181966Z" level=info msg="Forcibly stopping sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\"" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.463 [WARNING][5017] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0", GenerateName:"calico-kube-controllers-6fbd766b6b-", Namespace:"calico-system", SelfLink:"", UID:"d0780930-4dfa-4cd9-9093-a1b94ae21874", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbd766b6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"db96ca28e0933ae5fb83ea05328a6dfee02ce93cf56f4910f5da5b4cba462c0b", Pod:"calico-kube-controllers-6fbd766b6b-f77jl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0bf81bce2cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.464 [INFO][5017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.464 [INFO][5017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" iface="eth0" netns="" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.464 [INFO][5017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.464 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.501 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.502 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.502 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.511 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.511 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" HandleID="k8s-pod-network.728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--kube--controllers--6fbd766b6b--f77jl-eth0" Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.516 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:35.521870 containerd[1462]: 2024-11-12 20:49:35.519 [INFO][5017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16" Nov 12 20:49:35.524826 containerd[1462]: time="2024-11-12T20:49:35.522255220Z" level=info msg="TearDown network for sandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\" successfully" Nov 12 20:49:35.575169 containerd[1462]: time="2024-11-12T20:49:35.574831031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:35.575169 containerd[1462]: time="2024-11-12T20:49:35.574978762Z" level=info msg="RemovePodSandbox \"728463c1accba5daf51aa8f42fe2a17e9a738a245208f22863c34b09f50c5a16\" returns successfully" Nov 12 20:49:35.576078 containerd[1462]: time="2024-11-12T20:49:35.576029155Z" level=info msg="StopPodSandbox for \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\"" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.637 [WARNING][5042] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09520a02-648f-4672-85a7-9b0a62557d5f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068", Pod:"csi-node-driver-srhkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib75023cc603", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.638 [INFO][5042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.638 [INFO][5042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" iface="eth0" netns="" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.638 [INFO][5042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.638 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.675 [INFO][5048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.676 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.676 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.684 [WARNING][5048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.684 [INFO][5048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.687 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:35.691443 containerd[1462]: 2024-11-12 20:49:35.688 [INFO][5042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.692578 containerd[1462]: time="2024-11-12T20:49:35.691948208Z" level=info msg="TearDown network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\" successfully" Nov 12 20:49:35.692578 containerd[1462]: time="2024-11-12T20:49:35.691978222Z" level=info msg="StopPodSandbox for \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\" returns successfully" Nov 12 20:49:35.692578 containerd[1462]: time="2024-11-12T20:49:35.692494230Z" level=info msg="RemovePodSandbox for \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\"" Nov 12 20:49:35.692578 containerd[1462]: time="2024-11-12T20:49:35.692531183Z" level=info msg="Forcibly stopping sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\"" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.783 [WARNING][5066] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09520a02-648f-4672-85a7-9b0a62557d5f", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"3fa05bc0693f48bc500e72d811da413e44029b32e0370b0aedaf984d65fb5068", Pod:"csi-node-driver-srhkg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib75023cc603", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.783 [INFO][5066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.783 [INFO][5066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" iface="eth0" netns="" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.783 [INFO][5066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.783 [INFO][5066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.814 [INFO][5072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.814 [INFO][5072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.814 [INFO][5072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.824 [WARNING][5072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.824 [INFO][5072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" HandleID="k8s-pod-network.81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-csi--node--driver--srhkg-eth0" Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.827 [INFO][5072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:35.833989 containerd[1462]: 2024-11-12 20:49:35.830 [INFO][5066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8" Nov 12 20:49:35.833989 containerd[1462]: time="2024-11-12T20:49:35.832487755Z" level=info msg="TearDown network for sandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\" successfully" Nov 12 20:49:35.842634 containerd[1462]: time="2024-11-12T20:49:35.842551684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:35.842981 containerd[1462]: time="2024-11-12T20:49:35.842663893Z" level=info msg="RemovePodSandbox \"81a9aaf60851460b6722b031eec6ff073db3f6c8d8f779d9753f4ca5ad0d92a8\" returns successfully" Nov 12 20:49:35.843840 containerd[1462]: time="2024-11-12T20:49:35.843661221Z" level=info msg="StopPodSandbox for \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\"" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.911 [WARNING][5090] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5666e17e-e08c-4c69-b9f9-6f9b8433b194", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6", Pod:"coredns-6f6b679f8f-lvkdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali326800ec2f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.912 [INFO][5090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.912 [INFO][5090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" iface="eth0" netns="" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.912 [INFO][5090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.912 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.945 [INFO][5096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.945 [INFO][5096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.945 [INFO][5096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.954 [WARNING][5096] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.954 [INFO][5096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.959 [INFO][5096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:35.964604 containerd[1462]: 2024-11-12 20:49:35.962 [INFO][5090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:35.966260 containerd[1462]: time="2024-11-12T20:49:35.964648358Z" level=info msg="TearDown network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\" successfully" Nov 12 20:49:35.966260 containerd[1462]: time="2024-11-12T20:49:35.964674320Z" level=info msg="StopPodSandbox for \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\" returns successfully" Nov 12 20:49:35.966260 containerd[1462]: time="2024-11-12T20:49:35.965394658Z" level=info msg="RemovePodSandbox for \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\"" Nov 12 20:49:35.966260 containerd[1462]: time="2024-11-12T20:49:35.965426688Z" level=info msg="Forcibly stopping sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\"" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.028 [WARNING][5114] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5666e17e-e08c-4c69-b9f9-6f9b8433b194", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"86afc230051329b9dc8c5258399b6faf6d3246d549938b91d0340949d14d1cd6", Pod:"coredns-6f6b679f8f-lvkdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali326800ec2f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.029 [INFO][5114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.029 [INFO][5114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" iface="eth0" netns="" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.029 [INFO][5114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.029 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.069 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.070 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.070 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.079 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.079 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" HandleID="k8s-pod-network.55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--lvkdh-eth0" Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.081 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.086174 containerd[1462]: 2024-11-12 20:49:36.083 [INFO][5114] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca" Nov 12 20:49:36.086174 containerd[1462]: time="2024-11-12T20:49:36.086083353Z" level=info msg="TearDown network for sandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\" successfully" Nov 12 20:49:36.095145 containerd[1462]: time="2024-11-12T20:49:36.095036871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:36.095511 containerd[1462]: time="2024-11-12T20:49:36.095185859Z" level=info msg="RemovePodSandbox \"55d3add20a404fd2b235d07455b2c011273625d2463517900d4e75ad63a675ca\" returns successfully" Nov 12 20:49:36.096471 containerd[1462]: time="2024-11-12T20:49:36.096431015Z" level=info msg="StopPodSandbox for \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\"" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.160 [WARNING][5138] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c", Pod:"coredns-6f6b679f8f-bg25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibca256e7035", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.161 [INFO][5138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.161 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" iface="eth0" netns="" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.161 [INFO][5138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.161 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.202 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.203 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.203 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.214 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.214 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.218 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.223275 containerd[1462]: 2024-11-12 20:49:36.220 [INFO][5138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.225550 containerd[1462]: time="2024-11-12T20:49:36.223489010Z" level=info msg="TearDown network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\" successfully" Nov 12 20:49:36.225550 containerd[1462]: time="2024-11-12T20:49:36.223550546Z" level=info msg="StopPodSandbox for \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\" returns successfully" Nov 12 20:49:36.225550 containerd[1462]: time="2024-11-12T20:49:36.224350769Z" level=info msg="RemovePodSandbox for \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\"" Nov 12 20:49:36.225550 containerd[1462]: time="2024-11-12T20:49:36.224392702Z" level=info msg="Forcibly stopping sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\"" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.294 [WARNING][5163] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"b84cd075-0ebd-4d27-8bb3-99aa5a83c1e5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"0f1bf72710ddad3969ca9de5c618b175e374b2762d4aec04c9ef2209fac4455c", Pod:"coredns-6f6b679f8f-bg25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibca256e7035", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.294 [INFO][5163] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.294 [INFO][5163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" iface="eth0" netns="" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.294 [INFO][5163] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.294 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.345 [INFO][5170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.345 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.345 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.355 [WARNING][5170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.355 [INFO][5170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" HandleID="k8s-pod-network.22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-coredns--6f6b679f8f--bg25k-eth0" Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.361 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.366402 containerd[1462]: 2024-11-12 20:49:36.363 [INFO][5163] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c" Nov 12 20:49:36.366402 containerd[1462]: time="2024-11-12T20:49:36.366242439Z" level=info msg="TearDown network for sandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\" successfully" Nov 12 20:49:36.378438 containerd[1462]: time="2024-11-12T20:49:36.377203871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:36.378438 containerd[1462]: time="2024-11-12T20:49:36.377359575Z" level=info msg="RemovePodSandbox \"22b84a0f12da83b15e0c2c992d12554faea446cd9daee45cbaa7f51f0e37384c\" returns successfully" Nov 12 20:49:36.379594 containerd[1462]: time="2024-11-12T20:49:36.379359719Z" level=info msg="StopPodSandbox for \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\"" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.454 [WARNING][5189] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193", Pod:"calico-apiserver-857999858d-g2wj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757cd175588", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.454 [INFO][5189] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.454 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" iface="eth0" netns="" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.454 [INFO][5189] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.454 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.491 [INFO][5195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.491 [INFO][5195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.492 [INFO][5195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.502 [WARNING][5195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.502 [INFO][5195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.508 [INFO][5195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.513125 containerd[1462]: 2024-11-12 20:49:36.510 [INFO][5189] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.514390 containerd[1462]: time="2024-11-12T20:49:36.513136940Z" level=info msg="TearDown network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\" successfully" Nov 12 20:49:36.514390 containerd[1462]: time="2024-11-12T20:49:36.513172570Z" level=info msg="StopPodSandbox for \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\" returns successfully" Nov 12 20:49:36.514390 containerd[1462]: time="2024-11-12T20:49:36.513733637Z" level=info msg="RemovePodSandbox for \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\"" Nov 12 20:49:36.514390 containerd[1462]: time="2024-11-12T20:49:36.513772413Z" level=info msg="Forcibly stopping sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\"" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.579 [WARNING][5213] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfde3c6-532c-42e8-b5c0-a7b194fb76ba", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"95d1a1e6bfbdc58fe5da538ecc670f0b2028959dca3379da69087f1c69502193", Pod:"calico-apiserver-857999858d-g2wj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali757cd175588", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.579 [INFO][5213] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.579 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" iface="eth0" netns="" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.579 [INFO][5213] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.579 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.622 [INFO][5219] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.622 [INFO][5219] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.622 [INFO][5219] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.630 [WARNING][5219] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.631 [INFO][5219] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" HandleID="k8s-pod-network.7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--g2wj2-eth0" Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.633 [INFO][5219] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.639034 containerd[1462]: 2024-11-12 20:49:36.636 [INFO][5213] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800" Nov 12 20:49:36.639034 containerd[1462]: time="2024-11-12T20:49:36.639001350Z" level=info msg="TearDown network for sandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\" successfully" Nov 12 20:49:36.648149 containerd[1462]: time="2024-11-12T20:49:36.647544352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:36.648149 containerd[1462]: time="2024-11-12T20:49:36.647698532Z" level=info msg="RemovePodSandbox \"7ee9c426229b318250540458b462318f3408de31a97a5fb0c59eed7ca0fdb800\" returns successfully" Nov 12 20:49:36.648798 containerd[1462]: time="2024-11-12T20:49:36.648443060Z" level=info msg="StopPodSandbox for \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\"" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.707 [WARNING][5237] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c", Pod:"calico-apiserver-857999858d-qgxrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5935e81b84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.708 [INFO][5237] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.708 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" iface="eth0" netns="" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.708 [INFO][5237] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.708 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.742 [INFO][5244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.743 [INFO][5244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.743 [INFO][5244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.753 [WARNING][5244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.753 [INFO][5244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.755 [INFO][5244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.765088 containerd[1462]: 2024-11-12 20:49:36.760 [INFO][5237] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.766106 containerd[1462]: time="2024-11-12T20:49:36.765136454Z" level=info msg="TearDown network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\" successfully" Nov 12 20:49:36.766106 containerd[1462]: time="2024-11-12T20:49:36.765165396Z" level=info msg="StopPodSandbox for \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\" returns successfully" Nov 12 20:49:36.766106 containerd[1462]: time="2024-11-12T20:49:36.765883947Z" level=info msg="RemovePodSandbox for \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\"" Nov 12 20:49:36.766106 containerd[1462]: time="2024-11-12T20:49:36.765954962Z" level=info msg="Forcibly stopping sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\"" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.831 [WARNING][5262] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0", GenerateName:"calico-apiserver-857999858d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6a9de-2d10-4788-aa6a-d0b5f89e72a6", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857999858d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-2-eeaeb2d4c6", ContainerID:"df55d2aca0f4f9dec7084788b3aa53177321aacfa0f0051e2c43080ea338d56c", Pod:"calico-apiserver-857999858d-qgxrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5935e81b84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.832 [INFO][5262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.832 [INFO][5262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" iface="eth0" netns="" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.832 [INFO][5262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.832 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.876 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.877 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.878 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.887 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.887 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" HandleID="k8s-pod-network.ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Workload="ci--4081.2.0--2--eeaeb2d4c6-k8s-calico--apiserver--857999858d--qgxrw-eth0" Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.890 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:49:36.896372 containerd[1462]: 2024-11-12 20:49:36.893 [INFO][5262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2" Nov 12 20:49:36.896372 containerd[1462]: time="2024-11-12T20:49:36.896119059Z" level=info msg="TearDown network for sandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\" successfully" Nov 12 20:49:36.910074 containerd[1462]: time="2024-11-12T20:49:36.905929179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:49:36.910255 containerd[1462]: time="2024-11-12T20:49:36.910117849Z" level=info msg="RemovePodSandbox \"ec1d5e27d6437cca4c8ad3ef77f3a589a1a8dd4fdd278bd179f966cbb35983c2\" returns successfully" Nov 12 20:49:38.159352 systemd[1]: Started sshd@15-147.182.197.11:22-139.178.68.195:38460.service - OpenSSH per-connection server daemon (139.178.68.195:38460). Nov 12 20:49:38.265878 sshd[5276]: Accepted publickey for core from 139.178.68.195 port 38460 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:38.268947 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:38.276312 systemd-logind[1448]: New session 16 of user core. Nov 12 20:49:38.280185 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:49:38.543997 sshd[5276]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:38.549056 systemd[1]: sshd@15-147.182.197.11:22-139.178.68.195:38460.service: Deactivated successfully. Nov 12 20:49:38.551814 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:49:38.553168 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:49:38.555291 systemd-logind[1448]: Removed session 16. 
Nov 12 20:49:43.460259 kubelet[2510]: I1112 20:49:43.459798 2510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:49:43.493898 kubelet[2510]: I1112 20:49:43.491702 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-srhkg" podStartSLOduration=44.0481751 podStartE2EDuration="52.491677734s" podCreationTimestamp="2024-11-12 20:48:51 +0000 UTC" firstStartedPulling="2024-11-12 20:49:17.399505086 +0000 UTC m=+42.404482446" lastFinishedPulling="2024-11-12 20:49:25.843007719 +0000 UTC m=+50.847985080" observedRunningTime="2024-11-12 20:49:26.842142322 +0000 UTC m=+51.847119692" watchObservedRunningTime="2024-11-12 20:49:43.491677734 +0000 UTC m=+68.496655103" Nov 12 20:49:43.563390 systemd[1]: Started sshd@16-147.182.197.11:22-139.178.68.195:38464.service - OpenSSH per-connection server daemon (139.178.68.195:38464). Nov 12 20:49:43.621327 sshd[5315]: Accepted publickey for core from 139.178.68.195 port 38464 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:43.623973 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:43.633485 systemd-logind[1448]: New session 17 of user core. Nov 12 20:49:43.641146 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:49:43.810702 sshd[5315]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:43.816007 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:49:43.816238 systemd[1]: sshd@16-147.182.197.11:22-139.178.68.195:38464.service: Deactivated successfully. Nov 12 20:49:43.818663 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:49:43.823283 systemd-logind[1448]: Removed session 17. Nov 12 20:49:48.837491 systemd[1]: Started sshd@17-147.182.197.11:22-139.178.68.195:55284.service - OpenSSH per-connection server daemon (139.178.68.195:55284). Nov 12 20:49:48.909136 sshd[5350]: Accepted publickey for core from 139.178.68.195 port 55284 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:48.912317 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:48.919887 systemd-logind[1448]: New session 18 of user core. Nov 12 20:49:48.924223 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 12 20:49:49.183541 sshd[5350]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:49.195120 systemd[1]: sshd@17-147.182.197.11:22-139.178.68.195:55284.service: Deactivated successfully. Nov 12 20:49:49.197814 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:49:49.200813 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:49:49.206326 systemd[1]: Started sshd@18-147.182.197.11:22-139.178.68.195:55298.service - OpenSSH per-connection server daemon (139.178.68.195:55298). Nov 12 20:49:49.208251 systemd-logind[1448]: Removed session 18. Nov 12 20:49:49.245442 sshd[5364]: Accepted publickey for core from 139.178.68.195 port 55298 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:49:49.247642 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:49.256361 systemd-logind[1448]: New session 19 of user core. Nov 12 20:49:49.261627 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 12 20:49:49.942668 sshd[5364]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:49.960531 systemd[1]: Started sshd@19-147.182.197.11:22-139.178.68.195:55308.service - OpenSSH per-connection server daemon (139.178.68.195:55308).
Nov 12 20:49:49.961695 systemd[1]: sshd@18-147.182.197.11:22-139.178.68.195:55298.service: Deactivated successfully.
Nov 12 20:49:49.966051 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:49:49.969401 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:49:49.973273 systemd-logind[1448]: Removed session 19.
Nov 12 20:49:50.040862 sshd[5373]: Accepted publickey for core from 139.178.68.195 port 55308 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:49:50.044299 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:49:50.051020 systemd-logind[1448]: New session 20 of user core.
Nov 12 20:49:50.055105 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:49:52.897043 sshd[5373]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:52.906997 systemd[1]: sshd@19-147.182.197.11:22-139.178.68.195:55308.service: Deactivated successfully.
Nov 12 20:49:52.913286 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:49:52.919745 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:49:52.927719 systemd[1]: Started sshd@20-147.182.197.11:22-139.178.68.195:55314.service - OpenSSH per-connection server daemon (139.178.68.195:55314).
Nov 12 20:49:52.937134 systemd-logind[1448]: Removed session 20.
Nov 12 20:49:53.029312 sshd[5409]: Accepted publickey for core from 139.178.68.195 port 55314 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:49:53.034515 sshd[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:49:53.047976 systemd-logind[1448]: New session 21 of user core.
Nov 12 20:49:53.054140 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:49:54.183100 sshd[5409]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:54.195748 systemd[1]: sshd@20-147.182.197.11:22-139.178.68.195:55314.service: Deactivated successfully.
Nov 12 20:49:54.201669 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:49:54.206292 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:49:54.214379 systemd[1]: Started sshd@21-147.182.197.11:22-139.178.68.195:55316.service - OpenSSH per-connection server daemon (139.178.68.195:55316).
Nov 12 20:49:54.219970 systemd-logind[1448]: Removed session 21.
Nov 12 20:49:54.294698 sshd[5424]: Accepted publickey for core from 139.178.68.195 port 55316 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:49:54.297642 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:49:54.309018 systemd-logind[1448]: New session 22 of user core.
Nov 12 20:49:54.315124 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:49:54.558455 sshd[5424]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:54.565292 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:49:54.566477 systemd[1]: sshd@21-147.182.197.11:22-139.178.68.195:55316.service: Deactivated successfully.
Nov 12 20:49:54.570502 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:49:54.572407 systemd-logind[1448]: Removed session 22.
Nov 12 20:49:59.164504 kubelet[2510]: E1112 20:49:59.164252 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:49:59.580384 systemd[1]: Started sshd@22-147.182.197.11:22-139.178.68.195:48314.service - OpenSSH per-connection server daemon (139.178.68.195:48314).
Nov 12 20:49:59.692803 sshd[5445]: Accepted publickey for core from 139.178.68.195 port 48314 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:49:59.696658 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:49:59.704129 systemd-logind[1448]: New session 23 of user core.
Nov 12 20:49:59.711204 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:50:00.014737 sshd[5445]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:00.022676 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:50:00.023444 systemd[1]: sshd@22-147.182.197.11:22-139.178.68.195:48314.service: Deactivated successfully.
Nov 12 20:50:00.030943 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:50:00.051883 systemd-logind[1448]: Removed session 23.
Nov 12 20:50:05.037469 systemd[1]: Started sshd@23-147.182.197.11:22-139.178.68.195:48330.service - OpenSSH per-connection server daemon (139.178.68.195:48330).
Nov 12 20:50:05.112919 sshd[5461]: Accepted publickey for core from 139.178.68.195 port 48330 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:05.115986 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:05.125915 systemd-logind[1448]: New session 24 of user core.
Nov 12 20:50:05.139829 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:50:05.409950 sshd[5461]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:05.418350 systemd[1]: sshd@23-147.182.197.11:22-139.178.68.195:48330.service: Deactivated successfully.
Nov 12 20:50:05.424209 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:50:05.425829 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:50:05.429269 systemd-logind[1448]: Removed session 24.
Nov 12 20:50:09.171909 kubelet[2510]: E1112 20:50:09.171730 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:50:10.434384 systemd[1]: Started sshd@24-147.182.197.11:22-139.178.68.195:58584.service - OpenSSH per-connection server daemon (139.178.68.195:58584).
Nov 12 20:50:10.555885 sshd[5475]: Accepted publickey for core from 139.178.68.195 port 58584 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:10.559084 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:10.568750 systemd-logind[1448]: New session 25 of user core.
Nov 12 20:50:10.573245 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:50:11.062586 sshd[5475]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:11.070495 systemd[1]: sshd@24-147.182.197.11:22-139.178.68.195:58584.service: Deactivated successfully.
Nov 12 20:50:11.074571 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:50:11.078467 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:50:11.080777 systemd-logind[1448]: Removed session 25.
Nov 12 20:50:12.164605 kubelet[2510]: E1112 20:50:12.164442 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:50:13.165478 kubelet[2510]: E1112 20:50:13.163872 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:50:15.504226 systemd[1]: run-containerd-runc-k8s.io-013e7d5423e39b0bd926a47b1792ea9bd3c9b29c88d507c775010dd5e9551580-runc.92vdk2.mount: Deactivated successfully.
Nov 12 20:50:16.086360 systemd[1]: Started sshd@25-147.182.197.11:22-139.178.68.195:35894.service - OpenSSH per-connection server daemon (139.178.68.195:35894).
Nov 12 20:50:16.144907 sshd[5512]: Accepted publickey for core from 139.178.68.195 port 35894 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:16.147499 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:16.155176 systemd-logind[1448]: New session 26 of user core.
Nov 12 20:50:16.160495 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:50:16.382592 sshd[5512]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:16.389328 systemd[1]: sshd@25-147.182.197.11:22-139.178.68.195:35894.service: Deactivated successfully.
Nov 12 20:50:16.394222 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:50:16.396503 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:50:16.399048 systemd-logind[1448]: Removed session 26.