Dec 13 02:38:05.067339 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 02:38:05.067379 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:38:05.067391 kernel: BIOS-provided physical RAM map:
Dec 13 02:38:05.067399 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:38:05.067406 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:38:05.067413 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:38:05.067421 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 02:38:05.067429 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 02:38:05.067436 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 02:38:05.067445 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:38:05.067453 kernel: NX (Execute Disable) protection: active
Dec 13 02:38:05.067460 kernel: APIC: Static calls initialized
Dec 13 02:38:05.067467 kernel: SMBIOS 2.8 present.
Dec 13 02:38:05.067475 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 02:38:05.067484 kernel: Hypervisor detected: KVM
Dec 13 02:38:05.067494 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:38:05.067502 kernel: kvm-clock: using sched offset of 4177675267 cycles
Dec 13 02:38:05.067510 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:38:05.067518 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 02:38:05.067526 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:38:05.067534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:38:05.067542 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 02:38:05.067550 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 02:38:05.067558 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:38:05.067568 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:38:05.067576 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 02:38:05.067584 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:38:05.067592 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:38:05.067599 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:38:05.067607 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 02:38:05.067615 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:38:05.067623 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:38:05.067631 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 02:38:05.067641 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 02:38:05.067649 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 02:38:05.067657 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 02:38:05.067664 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 02:38:05.067672 kernel: No NUMA configuration found
Dec 13 02:38:05.067680 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 02:38:05.067688 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 02:38:05.067699 kernel: Zone ranges:
Dec 13 02:38:05.067710 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:38:05.067718 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 02:38:05.067726 kernel: Normal empty
Dec 13 02:38:05.067734 kernel: Movable zone start for each node
Dec 13 02:38:05.067742 kernel: Early memory node ranges
Dec 13 02:38:05.067750 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:38:05.067758 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 02:38:05.067769 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 02:38:05.067777 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:38:05.067786 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:38:05.067794 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 02:38:05.067802 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 02:38:05.067810 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:38:05.067818 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:38:05.067826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:38:05.067835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:38:05.067845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:38:05.067854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:38:05.067862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:38:05.067870 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:38:05.067878 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:38:05.067887 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 02:38:05.067895 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 02:38:05.067903 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:38:05.067911 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:38:05.067922 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:38:05.067931 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 02:38:05.067939 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 02:38:05.067947 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:38:05.067955 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 02:38:05.067965 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:38:05.067974 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:38:05.067982 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:38:05.067993 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:38:05.068001 kernel: Fallback order for Node 0: 0
Dec 13 02:38:05.068009 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 02:38:05.068017 kernel: Policy zone: DMA32
Dec 13 02:38:05.068025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:38:05.068034 kernel: Memory: 1971212K/2096620K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 02:38:05.068042 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:38:05.068051 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 02:38:05.068061 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 02:38:05.068069 kernel: Dynamic Preempt: voluntary
Dec 13 02:38:05.068077 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 02:38:05.068086 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:38:05.068095 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:38:05.068103 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 02:38:05.068112 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:38:05.068120 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:38:05.068128 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:38:05.068137 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:38:05.068147 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:38:05.068156 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 02:38:05.068164 kernel: Console: colour VGA+ 80x25
Dec 13 02:38:05.068172 kernel: printk: console [tty0] enabled
Dec 13 02:38:05.068180 kernel: printk: console [ttyS0] enabled
Dec 13 02:38:05.068189 kernel: ACPI: Core revision 20230628
Dec 13 02:38:05.068197 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:38:05.068205 kernel: x2apic enabled
Dec 13 02:38:05.070244 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 02:38:05.070269 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 02:38:05.070281 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 02:38:05.070293 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 02:38:05.070306 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 02:38:05.070316 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 02:38:05.070328 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:38:05.070341 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:38:05.070352 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:38:05.070362 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:38:05.070376 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:38:05.070386 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 02:38:05.070395 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:38:05.070404 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:38:05.070414 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 02:38:05.070423 kernel: landlock: Up and running.
Dec 13 02:38:05.070432 kernel: SELinux: Initializing.
Dec 13 02:38:05.070442 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:38:05.070466 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:38:05.070477 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 02:38:05.070487 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:38:05.070500 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:38:05.070510 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:38:05.070520 kernel: Performance Events: AMD PMU driver.
Dec 13 02:38:05.070529 kernel: ... version: 0
Dec 13 02:38:05.070539 kernel: ... bit width: 48
Dec 13 02:38:05.070552 kernel: ... generic registers: 4
Dec 13 02:38:05.070563 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:38:05.070573 kernel: ... max period: 00007fffffffffff
Dec 13 02:38:05.070582 kernel: ... fixed-purpose events: 0
Dec 13 02:38:05.070592 kernel: ... event mask: 000000000000000f
Dec 13 02:38:05.070602 kernel: signal: max sigframe size: 1440
Dec 13 02:38:05.070612 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:38:05.070622 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 02:38:05.070632 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:38:05.070642 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 02:38:05.070655 kernel: .... node #0, CPUs: #1
Dec 13 02:38:05.070664 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:38:05.070674 kernel: smpboot: Max logical packages: 2
Dec 13 02:38:05.070684 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 02:38:05.070694 kernel: devtmpfs: initialized
Dec 13 02:38:05.070704 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:38:05.070714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:38:05.070724 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:38:05.070734 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:38:05.070746 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:38:05.070756 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:38:05.070766 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:38:05.070776 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:38:05.070786 kernel: audit: type=2000 audit(1734057484.475:1): state=initialized audit_enabled=0 res=1
Dec 13 02:38:05.070795 kernel: cpuidle: using governor menu
Dec 13 02:38:05.070805 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:38:05.070815 kernel: dca service started, version 1.12.1
Dec 13 02:38:05.070825 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:38:05.070838 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:38:05.070848 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:38:05.070857 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 02:38:05.070867 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:38:05.070877 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:38:05.070887 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:38:05.070897 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:38:05.070907 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:38:05.070916 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 02:38:05.070929 kernel: ACPI: Interpreter enabled
Dec 13 02:38:05.070939 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 02:38:05.070949 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:38:05.070959 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:38:05.070969 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 02:38:05.070978 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 02:38:05.070988 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:38:05.071174 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:38:05.073331 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 02:38:05.073437 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 02:38:05.073453 kernel: acpiphp: Slot [3] registered
Dec 13 02:38:05.073463 kernel: acpiphp: Slot [4] registered
Dec 13 02:38:05.073472 kernel: acpiphp: Slot [5] registered
Dec 13 02:38:05.073482 kernel: acpiphp: Slot [6] registered
Dec 13 02:38:05.073492 kernel: acpiphp: Slot [7] registered
Dec 13 02:38:05.073501 kernel: acpiphp: Slot [8] registered
Dec 13 02:38:05.073516 kernel: acpiphp: Slot [9] registered
Dec 13 02:38:05.073526 kernel: acpiphp: Slot [10] registered
Dec 13 02:38:05.073536 kernel: acpiphp: Slot [11] registered
Dec 13 02:38:05.073545 kernel: acpiphp: Slot [12] registered
Dec 13 02:38:05.073555 kernel: acpiphp: Slot [13] registered
Dec 13 02:38:05.073565 kernel: acpiphp: Slot [14] registered
Dec 13 02:38:05.073574 kernel: acpiphp: Slot [15] registered
Dec 13 02:38:05.073585 kernel: acpiphp: Slot [16] registered
Dec 13 02:38:05.073594 kernel: acpiphp: Slot [17] registered
Dec 13 02:38:05.073604 kernel: acpiphp: Slot [18] registered
Dec 13 02:38:05.073617 kernel: acpiphp: Slot [19] registered
Dec 13 02:38:05.073627 kernel: acpiphp: Slot [20] registered
Dec 13 02:38:05.073636 kernel: acpiphp: Slot [21] registered
Dec 13 02:38:05.073646 kernel: acpiphp: Slot [22] registered
Dec 13 02:38:05.073656 kernel: acpiphp: Slot [23] registered
Dec 13 02:38:05.073665 kernel: acpiphp: Slot [24] registered
Dec 13 02:38:05.073675 kernel: acpiphp: Slot [25] registered
Dec 13 02:38:05.073685 kernel: acpiphp: Slot [26] registered
Dec 13 02:38:05.073694 kernel: acpiphp: Slot [27] registered
Dec 13 02:38:05.073707 kernel: acpiphp: Slot [28] registered
Dec 13 02:38:05.073717 kernel: acpiphp: Slot [29] registered
Dec 13 02:38:05.073727 kernel: acpiphp: Slot [30] registered
Dec 13 02:38:05.073736 kernel: acpiphp: Slot [31] registered
Dec 13 02:38:05.073746 kernel: PCI host bridge to bus 0000:00
Dec 13 02:38:05.073846 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:38:05.073935 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:38:05.074023 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:38:05.074117 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:38:05.074205 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 02:38:05.074326 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:38:05.074449 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:38:05.074561 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:38:05.074672 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 02:38:05.074780 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 02:38:05.074879 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 02:38:05.074977 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 02:38:05.075074 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 02:38:05.075172 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 02:38:05.075311 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 02:38:05.075415 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 02:38:05.075520 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 02:38:05.075628 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 02:38:05.075728 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 02:38:05.075827 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 02:38:05.075925 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 02:38:05.076024 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 02:38:05.076121 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:38:05.078531 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 02:38:05.078798 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 02:38:05.078897 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 02:38:05.079007 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 02:38:05.079112 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 02:38:05.082750 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 02:38:05.083013 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 02:38:05.083115 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 02:38:05.083235 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 02:38:05.083381 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 02:38:05.083484 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 02:38:05.083603 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 02:38:05.083714 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 02:38:05.083839 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 02:38:05.083942 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 02:38:05.083957 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:38:05.083967 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:38:05.083977 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:38:05.083987 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:38:05.083997 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:38:05.084007 kernel: iommu: Default domain type: Translated
Dec 13 02:38:05.084017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:38:05.084032 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:38:05.084042 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:38:05.084052 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:38:05.084061 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 02:38:05.084159 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 02:38:05.085064 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 02:38:05.085167 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:38:05.085181 kernel: vgaarb: loaded
Dec 13 02:38:05.085192 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:38:05.085207 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:38:05.085265 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:38:05.085276 kernel: pnp: PnP ACPI init
Dec 13 02:38:05.085385 kernel: pnp 00:03: [dma 2]
Dec 13 02:38:05.085402 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:38:05.085412 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:38:05.085422 kernel: NET: Registered PF_INET protocol family
Dec 13 02:38:05.085432 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:38:05.085447 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:38:05.085457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:38:05.085467 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:38:05.085477 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:38:05.085486 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:38:05.085496 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:38:05.085506 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:38:05.085516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:38:05.085526 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:38:05.085615 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:38:05.085701 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:38:05.087260 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:38:05.087376 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:38:05.087454 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 02:38:05.087544 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 02:38:05.087634 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:38:05.087647 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:38:05.087661 kernel: Initialise system trusted keyrings
Dec 13 02:38:05.087670 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:38:05.087679 kernel: Key type asymmetric registered
Dec 13 02:38:05.087687 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:38:05.087696 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 02:38:05.087705 kernel: io scheduler mq-deadline registered
Dec 13 02:38:05.087713 kernel: io scheduler kyber registered
Dec 13 02:38:05.087722 kernel: io scheduler bfq registered
Dec 13 02:38:05.087730 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:38:05.087751 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 02:38:05.087760 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 02:38:05.087769 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 02:38:05.087778 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 02:38:05.087786 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:38:05.087795 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:38:05.087804 kernel: random: crng init done
Dec 13 02:38:05.087813 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:38:05.087822 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:38:05.087832 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:38:05.087930 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 02:38:05.087946 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:38:05.088026 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 02:38:05.088105 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T02:38:04 UTC (1734057484)
Dec 13 02:38:05.088186 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 02:38:05.088199 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 02:38:05.089241 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:38:05.089253 kernel: Segment Routing with IPv6
Dec 13 02:38:05.089262 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:38:05.089270 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:38:05.089279 kernel: Key type dns_resolver registered
Dec 13 02:38:05.089288 kernel: IPI shorthand broadcast: enabled
Dec 13 02:38:05.089297 kernel: sched_clock: Marking stable (872007779, 119077006)->(994967081, -3882296)
Dec 13 02:38:05.089305 kernel: registered taskstats version 1
Dec 13 02:38:05.089314 kernel: Loading compiled-in X.509 certificates
Dec 13 02:38:05.089323 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 02:38:05.089336 kernel: Key type .fscrypt registered
Dec 13 02:38:05.089344 kernel: Key type fscrypt-provisioning registered
Dec 13 02:38:05.089353 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:38:05.089362 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:38:05.089370 kernel: ima: No architecture policies found
Dec 13 02:38:05.089379 kernel: clk: Disabling unused clocks
Dec 13 02:38:05.089388 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 02:38:05.089397 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 02:38:05.089408 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 02:38:05.089417 kernel: Run /init as init process
Dec 13 02:38:05.089425 kernel: with arguments:
Dec 13 02:38:05.089434 kernel: /init
Dec 13 02:38:05.089442 kernel: with environment:
Dec 13 02:38:05.089451 kernel: HOME=/
Dec 13 02:38:05.089459 kernel: TERM=linux
Dec 13 02:38:05.089468 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:38:05.089486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:38:05.089500 systemd[1]: Detected virtualization kvm.
Dec 13 02:38:05.089510 systemd[1]: Detected architecture x86-64.
Dec 13 02:38:05.089520 systemd[1]: Running in initrd.
Dec 13 02:38:05.089529 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:38:05.089538 systemd[1]: Hostname set to <localhost>.
Dec 13 02:38:05.089548 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:38:05.089557 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:38:05.089569 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:38:05.089578 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:38:05.089588 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 02:38:05.089598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:38:05.089608 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 02:38:05.089617 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 02:38:05.089629 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 02:38:05.089641 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 02:38:05.089651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:38:05.089660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:38:05.089670 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:38:05.089690 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:38:05.089702 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:38:05.089713 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:38:05.089723 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:38:05.089732 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:38:05.089742 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 02:38:05.089752 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 02:38:05.089762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:38:05.089772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:38:05.089782 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:38:05.089796 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:38:05.089805 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 02:38:05.089815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:38:05.089825 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 02:38:05.089834 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:38:05.089844 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:38:05.089854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:38:05.089863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:38:05.089873 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 02:38:05.089885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:38:05.089895 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:38:05.089905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:38:05.089965 systemd-journald[185]: Collecting audit messages is disabled.
Dec 13 02:38:05.089995 systemd-journald[185]: Journal started
Dec 13 02:38:05.090025 systemd-journald[185]: Runtime Journal (/run/log/journal/0ef15715a30d46dea636179f91f7b1ff) is 4.9M, max 39.3M, 34.4M free.
Dec 13 02:38:05.082677 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:38:05.103339 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:38:05.105258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:38:05.117954 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:38:05.117973 kernel: Bridge firewalling registered
Dec 13 02:38:05.116965 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:38:05.118862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:38:05.123148 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:38:05.124339 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:38:05.133771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:38:05.134655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:38:05.147444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:38:05.151266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:38:05.153263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:38:05.155257 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:38:05.171337 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:38:05.172660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:38:05.175414 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 02:38:05.192199 dracut-cmdline[220]: dracut-dracut-053
Dec 13 02:38:05.194903 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:38:05.202792 systemd-resolved[218]: Positive Trust Anchors:
Dec 13 02:38:05.202821 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:38:05.202862 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:38:05.206153 systemd-resolved[218]: Defaulting to hostname 'linux'.
Dec 13 02:38:05.207772 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:38:05.208888 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:38:05.277297 kernel: SCSI subsystem initialized
Dec 13 02:38:05.287333 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:38:05.299649 kernel: iscsi: registered transport (tcp)
Dec 13 02:38:05.321503 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:38:05.321580 kernel: QLogic iSCSI HBA Driver
Dec 13 02:38:05.375649 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:38:05.381569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 02:38:05.429807 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:38:05.429906 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:38:05.431841 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 02:38:05.485344 kernel: raid6: sse2x4 gen() 13072 MB/s
Dec 13 02:38:05.502288 kernel: raid6: sse2x2 gen() 15026 MB/s
Dec 13 02:38:05.519733 kernel: raid6: sse2x1 gen() 7894 MB/s
Dec 13 02:38:05.520023 kernel: raid6: using algorithm sse2x2 gen() 15026 MB/s
Dec 13 02:38:05.539591 kernel: raid6: .... xor() 5589 MB/s, rmw enabled
Dec 13 02:38:05.539945 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 02:38:05.572598 kernel: xor: measuring software checksum speed
Dec 13 02:38:05.572924 kernel: prefetch64-sse : 16084 MB/sec
Dec 13 02:38:05.572984 kernel: generic_sse : 14425 MB/sec
Dec 13 02:38:05.573694 kernel: xor: using function: prefetch64-sse (16084 MB/sec)
Dec 13 02:38:05.757277 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 02:38:05.771006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:38:05.783424 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:38:05.826805 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Dec 13 02:38:05.838265 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:38:05.847437 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 02:38:05.882489 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Dec 13 02:38:05.926198 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:38:05.933485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:38:05.978706 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:38:05.989864 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 02:38:06.036937 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:38:06.038325 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:38:06.039703 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:38:06.040929 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:38:06.049393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 02:38:06.054256 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Dec 13 02:38:06.110750 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Dec 13 02:38:06.110875 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:38:06.110890 kernel: GPT:17805311 != 41943039
Dec 13 02:38:06.110902 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:38:06.110914 kernel: GPT:17805311 != 41943039
Dec 13 02:38:06.110924 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:38:06.110941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:38:06.060800 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:38:06.085033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:38:06.085248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:38:06.085975 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:38:06.086489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:38:06.117572 kernel: libata version 3.00 loaded.
Dec 13 02:38:06.086619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:38:06.087114 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:38:06.094679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:38:06.179678 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 02:38:06.179856 kernel: scsi host0: ata_piix
Dec 13 02:38:06.179990 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (458)
Dec 13 02:38:06.180006 kernel: scsi host1: ata_piix
Dec 13 02:38:06.180130 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Dec 13 02:38:06.180146 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Dec 13 02:38:06.180160 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453)
Dec 13 02:38:06.162534 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 02:38:06.180461 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:38:06.186907 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 02:38:06.191625 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 02:38:06.192178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 02:38:06.198323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 02:38:06.206354 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 02:38:06.210366 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:38:06.216580 disk-uuid[500]: Primary Header is updated.
Dec 13 02:38:06.216580 disk-uuid[500]: Secondary Entries is updated.
Dec 13 02:38:06.216580 disk-uuid[500]: Secondary Header is updated.
Dec 13 02:38:06.226324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:38:06.227453 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:38:06.232234 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:38:07.247346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:38:07.249710 disk-uuid[504]: The operation has completed successfully.
Dec 13 02:38:07.324911 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:38:07.325345 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 02:38:07.355348 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 02:38:07.361906 sh[523]: Success
Dec 13 02:38:07.377403 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Dec 13 02:38:07.459878 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 02:38:07.463404 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 02:38:07.468489 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 02:38:07.489211 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 02:38:07.489329 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:38:07.489361 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 02:38:07.491310 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 02:38:07.492446 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 02:38:07.505209 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 02:38:07.506208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 02:38:07.518346 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 02:38:07.524792 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 02:38:07.542489 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:38:07.542563 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:38:07.542594 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:38:07.549291 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:38:07.560462 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:38:07.560209 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:38:07.569853 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 02:38:07.576395 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 02:38:07.643823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:38:07.650448 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 02:38:07.676163 systemd-networkd[707]: lo: Link UP
Dec 13 02:38:07.676172 systemd-networkd[707]: lo: Gained carrier
Dec 13 02:38:07.677407 systemd-networkd[707]: Enumeration completed
Dec 13 02:38:07.677783 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 02:38:07.678386 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:38:07.678389 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:38:07.680637 systemd[1]: Reached target network.target - Network.
Dec 13 02:38:07.681549 systemd-networkd[707]: eth0: Link UP
Dec 13 02:38:07.681553 systemd-networkd[707]: eth0: Gained carrier
Dec 13 02:38:07.681561 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:38:07.703596 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.208/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 02:38:07.727545 ignition[622]: Ignition 2.19.0
Dec 13 02:38:07.728237 ignition[622]: Stage: fetch-offline
Dec 13 02:38:07.728277 ignition[622]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:38:07.729476 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 02:38:07.728286 ignition[622]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:38:07.728370 ignition[622]: parsed url from cmdline: ""
Dec 13 02:38:07.728374 ignition[622]: no config URL provided
Dec 13 02:38:07.728379 ignition[622]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:38:07.728386 ignition[622]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:38:07.728393 ignition[622]: failed to fetch config: resource requires networking
Dec 13 02:38:07.728562 ignition[622]: Ignition finished successfully
Dec 13 02:38:07.736427 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 02:38:07.749657 ignition[716]: Ignition 2.19.0
Dec 13 02:38:07.749670 ignition[716]: Stage: fetch
Dec 13 02:38:07.749856 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:38:07.749867 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:38:07.749963 ignition[716]: parsed url from cmdline: ""
Dec 13 02:38:07.749966 ignition[716]: no config URL provided
Dec 13 02:38:07.749972 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:38:07.749980 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:38:07.750113 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 02:38:07.750122 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 02:38:07.750131 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 02:38:07.891548 systemd-resolved[218]: Detected conflict on linux IN A 172.24.4.208
Dec 13 02:38:07.891598 systemd-resolved[218]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Dec 13 02:38:07.956770 ignition[716]: GET result: OK
Dec 13 02:38:07.957111 ignition[716]: parsing config with SHA512: 23b9533f20ea0105d727091f4168fe37e203140ce1ad7b1925171a9639edcc9259e3c2667c1557569278490af8a552b078f3f7f53d52a00b339e309cb97a79d8
Dec 13 02:38:07.966799 unknown[716]: fetched base config from "system"
Dec 13 02:38:07.966825 unknown[716]: fetched base config from "system"
Dec 13 02:38:07.967717 ignition[716]: fetch: fetch complete
Dec 13 02:38:07.966860 unknown[716]: fetched user config from "openstack"
Dec 13 02:38:07.967729 ignition[716]: fetch: fetch passed
Dec 13 02:38:07.971574 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 02:38:07.967817 ignition[716]: Ignition finished successfully
Dec 13 02:38:07.978585 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 02:38:08.014698 ignition[722]: Ignition 2.19.0
Dec 13 02:38:08.014716 ignition[722]: Stage: kargs
Dec 13 02:38:08.015121 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:38:08.015147 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:38:08.017487 ignition[722]: kargs: kargs passed
Dec 13 02:38:08.019911 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 02:38:08.017584 ignition[722]: Ignition finished successfully
Dec 13 02:38:08.035021 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 02:38:08.065203 ignition[728]: Ignition 2.19.0
Dec 13 02:38:08.065276 ignition[728]: Stage: disks
Dec 13 02:38:08.065703 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:38:08.065729 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:38:08.071821 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 02:38:08.068039 ignition[728]: disks: disks passed
Dec 13 02:38:08.074805 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 02:38:08.068135 ignition[728]: Ignition finished successfully
Dec 13 02:38:08.076587 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 02:38:08.078933 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:38:08.081726 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 02:38:08.084254 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:38:08.099573 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 02:38:08.132942 systemd-fsck[736]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 02:38:08.148755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 02:38:08.158408 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 02:38:08.321249 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 02:38:08.322438 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 02:38:08.323967 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:38:08.335328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:38:08.337387 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 02:38:08.338689 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 02:38:08.343477 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 02:38:08.344104 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:38:08.344133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:38:08.350741 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 02:38:08.354308 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 02:38:08.372722 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (744)
Dec 13 02:38:08.384170 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:38:08.384277 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:38:08.384310 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:38:08.404284 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:38:08.415865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:38:08.483871 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:38:08.489621 initrd-setup-root[779]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:38:08.494606 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:38:08.500724 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:38:08.614462 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 02:38:08.620448 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 02:38:08.625456 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 02:38:08.640599 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 02:38:08.646361 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:38:08.686655 ignition[860]: INFO : Ignition 2.19.0
Dec 13 02:38:08.689046 ignition[860]: INFO : Stage: mount
Dec 13 02:38:08.689786 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:38:08.689786 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:38:08.694397 ignition[860]: INFO : mount: mount passed
Dec 13 02:38:08.694397 ignition[860]: INFO : Ignition finished successfully
Dec 13 02:38:08.696352 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 02:38:08.699250 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 02:38:09.349672 systemd-networkd[707]: eth0: Gained IPv6LL
Dec 13 02:38:15.564911 coreos-metadata[746]: Dec 13 02:38:15.564 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 02:38:15.610896 coreos-metadata[746]: Dec 13 02:38:15.610 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 02:38:15.626538 coreos-metadata[746]: Dec 13 02:38:15.626 INFO Fetch successful
Dec 13 02:38:15.628045 coreos-metadata[746]: Dec 13 02:38:15.626 INFO wrote hostname ci-4081-2-1-b-31d3d6554f.novalocal to /sysroot/etc/hostname
Dec 13 02:38:15.630169 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 02:38:15.630466 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 02:38:15.648437 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 02:38:15.671824 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:38:15.688294 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (877)
Dec 13 02:38:15.694602 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:38:15.694680 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:38:15.697635 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:38:15.707342 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:38:15.712123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:38:15.755679 ignition[895]: INFO : Ignition 2.19.0
Dec 13 02:38:15.758430 ignition[895]: INFO : Stage: files
Dec 13 02:38:15.758430 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:38:15.758430 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:38:15.765026 ignition[895]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:38:15.768802 ignition[895]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:38:15.768802 ignition[895]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:38:15.775148 ignition[895]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:38:15.777139 ignition[895]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:38:15.777139 ignition[895]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:38:15.776294 unknown[895]: wrote ssh authorized keys file for user: core
Dec 13 02:38:15.782462 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:38:15.782462 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:38:15.847678 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 02:38:16.148117 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:38:16.148117 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:38:16.152798 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 02:38:16.681814 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 02:38:18.377065 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:38:18.377065 ignition[895]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:38:18.387877 ignition[895]: INFO : files: files passed
Dec 13 02:38:18.387877 ignition[895]: INFO : Ignition finished successfully
Dec 13 02:38:18.390187 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 02:38:18.401574 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 02:38:18.413359 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 02:38:18.423779 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:38:18.424513 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 02:38:18.428418 initrd-setup-root-after-ignition[924]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:38:18.428418 initrd-setup-root-after-ignition[924]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:38:18.433008 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:38:18.430640 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 02:38:18.431437 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 02:38:18.437375 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 02:38:18.469415 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:38:18.469527 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 02:38:18.470249 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 02:38:18.471681 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 02:38:18.473752 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 02:38:18.478396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 02:38:18.518643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 02:38:18.525494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 02:38:18.557416 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:38:18.560687 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:38:18.562318 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 02:38:18.564988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:38:18.565325 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 02:38:18.568439 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 02:38:18.570176 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 02:38:18.572887 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 02:38:18.575379 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:38:18.577744 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 02:38:18.580493 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 02:38:18.583187 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:38:18.586029 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 02:38:18.588686 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 02:38:18.591452 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 02:38:18.593846 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:38:18.594120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:38:18.597012 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:38:18.598751 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:38:18.601107 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 02:38:18.602392 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:38:18.603626 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:38:18.603906 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:38:18.606412 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:38:18.606548 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 02:38:18.607248 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:38:18.607381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 02:38:18.617409 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 02:38:18.620453 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 02:38:18.621050 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:38:18.621270 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:38:18.623375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:38:18.623539 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:38:18.634245 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:38:18.634897 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 02:38:18.638928 ignition[948]: INFO : Ignition 2.19.0
Dec 13 02:38:18.638928 ignition[948]: INFO : Stage: umount
Dec 13 02:38:18.641144 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:38:18.641144 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:38:18.641144 ignition[948]: INFO : umount: umount passed
Dec 13 02:38:18.641144 ignition[948]: INFO : Ignition finished successfully
Dec 13 02:38:18.641469 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:38:18.641563 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 02:38:18.645066 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:38:18.645140 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 02:38:18.647091 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:38:18.647138 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 02:38:18.648767 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:38:18.648811 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 02:38:18.650540 systemd[1]: Stopped target network.target - Network.
Dec 13 02:38:18.653598 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:38:18.653645 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 02:38:18.655306 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 02:38:18.656915 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:38:18.661559 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:38:18.662545 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 02:38:18.664047 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 02:38:18.665554 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:38:18.665590 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:38:18.667410 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:38:18.667443 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:38:18.669302 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:38:18.669344 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 02:38:18.670796 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 02:38:18.670836 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 02:38:18.672794 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 02:38:18.674784 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 02:38:18.677107 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:38:18.677738 systemd-networkd[707]: eth0: DHCPv6 lease lost
Dec 13 02:38:18.678818 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:38:18.678913 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 02:38:18.680804 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:38:18.680903 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 02:38:18.683619 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:38:18.683854 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:38:18.690355 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 02:38:18.691042 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:38:18.691096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:38:18.691716 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:38:18.691763 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:38:18.692782 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:38:18.692823 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:38:18.693391 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 02:38:18.693430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:38:18.694652 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:38:18.703610 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:38:18.704678 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:38:18.706122 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:38:18.706373 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 02:38:18.707720 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:38:18.707785 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:38:18.709087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:38:18.709119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:38:18.710165 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:38:18.710238 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:38:18.711670 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:38:18.711713 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:38:18.712676 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:38:18.712721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:38:18.725364 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 02:38:18.725924 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 02:38:18.725977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:38:18.730513 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 02:38:18.730574 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:38:18.731139 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:38:18.731182 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:38:18.731815 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:38:18.731858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:38:18.733588 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:38:18.733685 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 02:38:18.770178 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:38:18.770463 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 02:38:18.773467 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 02:38:18.774962 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:38:18.775086 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 02:38:18.784577 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 02:38:18.806431 systemd[1]: Switching root.
Dec 13 02:38:18.858865 systemd-journald[185]: Journal stopped
Dec 13 02:38:22.009092 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:38:22.009150 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:38:22.009169 kernel: SELinux: policy capability open_perms=1
Dec 13 02:38:22.009180 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:38:22.009192 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:38:22.009203 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:38:22.009240 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:38:22.009253 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:38:22.009267 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:38:22.009278 kernel: audit: type=1403 audit(1734057500.103:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:38:22.009290 systemd[1]: Successfully loaded SELinux policy in 104.809ms.
Dec 13 02:38:22.009307 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.974ms.
Dec 13 02:38:22.009320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:38:22.009337 systemd[1]: Detected virtualization kvm.
Dec 13 02:38:22.009350 systemd[1]: Detected architecture x86-64.
Dec 13 02:38:22.009361 systemd[1]: Detected first boot.
Dec 13 02:38:22.009373 systemd[1]: Hostname set to <ci-4081-2-1-b-31d3d6554f.novalocal>.
Dec 13 02:38:22.009389 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:38:22.009401 zram_generator::config[990]: No configuration found.
Dec 13 02:38:22.009416 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:38:22.009428 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:38:22.009440 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 02:38:22.009452 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:38:22.009464 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 02:38:22.009476 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 02:38:22.009490 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 02:38:22.009502 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 02:38:22.009516 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 02:38:22.009528 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 02:38:22.009540 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 02:38:22.009552 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 02:38:22.009564 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:38:22.009576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:38:22.009588 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 02:38:22.009599 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 02:38:22.009611 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 02:38:22.009626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:38:22.009637 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 02:38:22.009649 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:38:22.009661 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 02:38:22.009674 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 02:38:22.009686 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:38:22.009699 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 02:38:22.009712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:38:22.009724 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:38:22.009736 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:38:22.009748 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:38:22.009760 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 02:38:22.009772 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 02:38:22.009784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:38:22.009796 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:38:22.009807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:38:22.009822 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 02:38:22.009834 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 02:38:22.009845 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 02:38:22.009857 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 02:38:22.009869 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:38:22.009881 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 02:38:22.009893 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 02:38:22.009905 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 02:38:22.009919 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:38:22.009931 systemd[1]: Reached target machines.target - Containers.
Dec 13 02:38:22.009943 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 02:38:22.009955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:38:22.009967 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:38:22.009980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 02:38:22.009991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:38:22.010003 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 02:38:22.010019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 02:38:22.010030 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 02:38:22.010042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 02:38:22.010054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:38:22.010067 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:38:22.010079 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 02:38:22.010092 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:38:22.010103 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:38:22.010115 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:38:22.010129 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:38:22.010141 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 02:38:22.010153 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 02:38:22.010164 kernel: loop: module loaded
Dec 13 02:38:22.010176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:38:22.010188 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:38:22.010199 systemd[1]: Stopped verity-setup.service.
Dec 13 02:38:22.010267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:38:22.010284 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 02:38:22.010300 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 02:38:22.010311 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 02:38:22.010324 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 02:38:22.010336 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 02:38:22.010350 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 02:38:22.010362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:38:22.010374 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:38:22.010386 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 02:38:22.010398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:38:22.010410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:38:22.010425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:38:22.010437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 02:38:22.010449 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:38:22.010461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 02:38:22.010472 kernel: ACPI: bus type drm_connector registered
Dec 13 02:38:22.010484 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 02:38:22.010496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 02:38:22.010523 systemd-journald[1083]: Collecting audit messages is disabled.
Dec 13 02:38:22.010549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:38:22.010562 systemd-journald[1083]: Journal started
Dec 13 02:38:22.010586 systemd-journald[1083]: Runtime Journal (/run/log/journal/0ef15715a30d46dea636179f91f7b1ff) is 4.9M, max 39.3M, 34.4M free.
Dec 13 02:38:21.353176 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:38:21.377388 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 02:38:21.377801 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:38:22.027250 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:38:22.023654 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:38:22.023875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 02:38:22.024531 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 02:38:22.032487 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 02:38:22.036304 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:38:22.036361 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:38:22.038319 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 02:38:22.043378 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 02:38:22.045523 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 02:38:22.046181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:38:22.122270 kernel: fuse: init (API version 7.39)
Dec 13 02:38:22.169569 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 02:38:22.179615 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 02:38:22.181789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:38:22.187483 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 02:38:22.195441 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 02:38:22.199434 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 02:38:22.200247 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:38:22.200449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 02:38:22.201555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:38:22.202814 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 02:38:22.204117 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 02:38:22.214068 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 02:38:22.222930 systemd-tmpfiles[1099]: ACLs are not supported, ignoring.
Dec 13 02:38:22.225313 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 02:38:22.227287 systemd-tmpfiles[1099]: ACLs are not supported, ignoring.
Dec 13 02:38:22.230410 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:38:22.233829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:38:22.234666 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 02:38:22.236017 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 02:38:22.239601 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 02:38:22.241764 systemd-journald[1083]: Time spent on flushing to /var/log/journal/0ef15715a30d46dea636179f91f7b1ff is 59.303ms for 945 entries.
Dec 13 02:38:22.241764 systemd-journald[1083]: System Journal (/var/log/journal/0ef15715a30d46dea636179f91f7b1ff) is 8.0M, max 584.8M, 576.8M free.
Dec 13 02:38:22.313390 systemd-journald[1083]: Received client request to flush runtime journal.
Dec 13 02:38:22.313434 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 02:38:22.245401 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 02:38:22.247573 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 02:38:22.249545 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:38:22.262424 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 02:38:22.293468 udevadm[1133]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 02:38:22.315085 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 02:38:22.331038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:38:22.388169 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:38:22.388887 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 02:38:22.407295 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:38:22.416626 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 02:38:22.424403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:38:22.452285 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 02:38:22.463386 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
Dec 13 02:38:22.463726 systemd-tmpfiles[1146]: ACLs are not supported, ignoring.
Dec 13 02:38:22.475622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:38:22.547307 kernel: loop2: detected capacity change from 0 to 8
Dec 13 02:38:22.572257 kernel: loop3: detected capacity change from 0 to 142488
Dec 13 02:38:22.700289 kernel: loop4: detected capacity change from 0 to 211296
Dec 13 02:38:22.804301 kernel: loop5: detected capacity change from 0 to 140768
Dec 13 02:38:22.863118 kernel: loop6: detected capacity change from 0 to 8
Dec 13 02:38:22.863341 kernel: loop7: detected capacity change from 0 to 142488
Dec 13 02:38:22.942089 (sd-merge)[1153]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Dec 13 02:38:22.943081 (sd-merge)[1153]: Merged extensions into '/usr'.
Dec 13 02:38:22.949324 systemd[1]: Reloading requested from client PID 1120 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 02:38:22.949358 systemd[1]: Reloading...
Dec 13 02:38:23.081424 zram_generator::config[1200]: No configuration found.
Dec 13 02:38:23.216425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:38:23.279063 systemd[1]: Reloading finished in 328 ms.
Dec 13 02:38:23.315393 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 02:38:23.325380 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:38:23.327883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:38:23.356291 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 02:38:23.369636 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:38:23.371615 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)...
Dec 13 02:38:23.371633 systemd[1]: Reloading...
Dec 13 02:38:23.387192 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:38:23.393380 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 02:38:23.394200 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:38:23.394528 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Dec 13 02:38:23.394598 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Dec 13 02:38:23.403875 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 02:38:23.403887 systemd-tmpfiles[1235]: Skipping /boot
Dec 13 02:38:23.410534 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Dec 13 02:38:23.422345 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 02:38:23.422357 systemd-tmpfiles[1235]: Skipping /boot
Dec 13 02:38:23.475057 zram_generator::config[1265]: No configuration found.
Dec 13 02:38:23.573929 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1267)
Dec 13 02:38:23.582245 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1267)
Dec 13 02:38:23.608243 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1279)
Dec 13 02:38:23.609806 ldconfig[1116]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:38:23.686846 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 02:38:23.686942 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 02:38:23.690243 kernel: ACPI: button: Power Button [PWRF]
Dec 13 02:38:23.720341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:38:23.740285 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 02:38:23.778247 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 02:38:23.791285 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 02:38:23.793241 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 02:38:23.800060 kernel: Console: switching to colour dummy device 80x25
Dec 13 02:38:23.800138 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 02:38:23.800159 kernel: [drm] features: -context_init
Dec 13 02:38:23.801533 kernel: [drm] number of scanouts: 1
Dec 13 02:38:23.801568 kernel: [drm] number of cap sets: 0
Dec 13 02:38:23.806244 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Dec 13 02:38:23.816143 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 02:38:23.816274 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 02:38:23.819251 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 02:38:23.838495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 02:38:23.840645 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 02:38:23.841149 systemd[1]: Reloading finished in 469 ms.
Dec 13 02:38:23.857677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:38:23.860113 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 02:38:23.864680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:38:23.901523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:38:23.906395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 02:38:23.910366 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 02:38:23.910594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 02:38:23.912546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 02:38:23.914378 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 02:38:23.917389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 02:38:23.921004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 02:38:23.921809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 02:38:23.923529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 02:38:23.928855 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 02:38:23.930964 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 02:38:23.939460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:38:23.951421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 02:38:23.954361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:38:23.955301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:38:23.955957 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 02:38:23.957284 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:38:23.957618 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:38:23.957744 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 02:38:23.959400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:38:23.959583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 02:38:23.971006 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 02:38:23.981270 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 02:38:23.987622 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 02:38:23.988681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:38:23.988841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 02:38:23.994208 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 02:38:24.003115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:38:24.004297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 02:38:24.006345 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:38:24.023577 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 02:38:24.046009 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 02:38:24.057507 lvm[1373]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:38:24.066314 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 02:38:24.067316 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:38:24.077006 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 02:38:24.078833 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 02:38:24.089410 augenrules[1396]: No rules
Dec 13 02:38:24.089477 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 02:38:24.093020 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 02:38:24.097504 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 02:38:24.099748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:38:24.114390 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 02:38:24.129548 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 02:38:24.140362 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:38:24.175883 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 02:38:24.185571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:38:24.208539 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 02:38:24.210404 systemd-networkd[1362]: lo: Link UP
Dec 13 02:38:24.210409 systemd-networkd[1362]: lo: Gained carrier
Dec 13 02:38:24.212561 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 02:38:24.213403 systemd-networkd[1362]: Enumeration completed
Dec 13 02:38:24.213812 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:38:24.213816 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:38:24.215469 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 02:38:24.215549 systemd-networkd[1362]: eth0: Link UP
Dec 13 02:38:24.215553 systemd-networkd[1362]: eth0: Gained carrier
Dec 13 02:38:24.215566 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:38:24.221341 systemd-timesyncd[1374]: No network connectivity, watching for changes.
Dec 13 02:38:24.226432 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 02:38:24.228665 systemd-resolved[1364]: Positive Trust Anchors:
Dec 13 02:38:24.228895 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:38:24.229115 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:38:24.229277 systemd-networkd[1362]: eth0: DHCPv4 address 172.24.4.208/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 02:38:24.230137 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection.
Dec 13 02:38:24.235064 systemd-resolved[1364]: Using system hostname 'ci-4081-2-1-b-31d3d6554f.novalocal'.
Dec 13 02:38:24.236661 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:38:24.237304 systemd[1]: Reached target network.target - Network.
Dec 13 02:38:24.237738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:38:24.238165 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 02:38:24.240714 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 02:38:24.242133 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 02:38:24.243713 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 02:38:24.245159 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 02:38:24.246593 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 02:38:24.248378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:38:24.248510 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:38:24.249434 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:38:24.978742 systemd-resolved[1364]: Clock change detected. Flushing caches.
Dec 13 02:38:24.978910 systemd-timesyncd[1374]: Contacted time server 212.83.158.83:123 (1.flatcar.pool.ntp.org).
Dec 13 02:38:24.978980 systemd-timesyncd[1374]: Initial clock synchronization to Fri 2024-12-13 02:38:24.978700 UTC.
Dec 13 02:38:24.980893 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 02:38:24.986174 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 02:38:24.992816 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 02:38:24.995372 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 02:38:24.997899 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:38:24.999128 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:38:25.000549 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 02:38:25.000654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 02:38:25.006579 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 02:38:25.012696 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 02:38:25.018842 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 02:38:25.028593 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 02:38:25.031773 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 02:38:25.034239 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 02:38:25.040679 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 02:38:25.045842 jq[1423]: false
Dec 13 02:38:25.046160 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 02:38:25.058166 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found loop4
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found loop5
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found loop6
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found loop7
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found vda
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found vda1
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found vda2
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found vda3
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found usr
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found vda4
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found vda6
Dec 13 02:38:25.063982 extend-filesystems[1424]: Found vda7
Dec 13 02:38:25.164329 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Dec 13 02:38:25.066696 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 02:38:25.164501 extend-filesystems[1424]: Found vda9
Dec 13 02:38:25.164501 extend-filesystems[1424]: Checking size of /dev/vda9
Dec 13 02:38:25.164501 extend-filesystems[1424]: Resized partition /dev/vda9
Dec 13 02:38:25.076372 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 02:38:25.171332 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024)
Dec 13 02:38:25.081792 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 02:38:25.082333 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 02:38:25.195296 dbus-daemon[1422]: [system] SELinux support is enabled
Dec 13 02:38:25.094988 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 02:38:25.200673 update_engine[1440]: I20241213 02:38:25.182366 1440 main.cc:92] Flatcar Update Engine starting
Dec 13 02:38:25.117629 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 02:38:25.200976 jq[1446]: true
Dec 13 02:38:25.129953 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 02:38:25.134462 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 02:38:25.134959 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 02:38:25.135149 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 02:38:25.170998 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 02:38:25.171214 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 02:38:25.198800 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 02:38:25.208846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 02:38:25.217399 update_engine[1440]: I20241213 02:38:25.210912 1440 update_check_scheduler.cc:74] Next update check in 5m26s
Dec 13 02:38:25.208903 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 02:38:25.216411 systemd-logind[1432]: New seat seat0.
Dec 13 02:38:25.222323 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1280)
Dec 13 02:38:25.222356 jq[1449]: true
Dec 13 02:38:25.222387 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 02:38:25.222413 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 02:38:25.229118 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 02:38:25.235786 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 02:38:25.246667 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 02:38:25.256830 systemd-logind[1432]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 02:38:25.263568 tar[1448]: linux-amd64/helm
Dec 13 02:38:25.256854 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 02:38:25.257097 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 02:38:25.290516 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Dec 13 02:38:25.401129 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 02:38:25.413613 bash[1476]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:38:25.413719 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 02:38:25.413719 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 3
Dec 13 02:38:25.413719 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Dec 13 02:38:25.406135 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 02:38:25.415017 extend-filesystems[1424]: Resized filesystem in /dev/vda9
Dec 13 02:38:25.408201 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 02:38:25.420732 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:38:25.433769 systemd[1]: Starting sshkeys.service...
Dec 13 02:38:25.441193 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:38:25.464557 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 02:38:25.476887 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 02:38:25.479700 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 02:38:25.490991 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 02:38:25.507884 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:38:25.508069 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 02:38:25.523055 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 02:38:25.568473 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 02:38:25.580972 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 02:38:25.593013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 02:38:25.595413 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 02:38:25.669203 containerd[1450]: time="2024-12-13T02:38:25.669063219Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 02:38:25.698086 containerd[1450]: time="2024-12-13T02:38:25.698035246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:38:25.700869 containerd[1450]: time="2024-12-13T02:38:25.700812335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:38:25.700997 containerd[1450]: time="2024-12-13T02:38:25.700981041Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 02:38:25.701084 containerd[1450]: time="2024-12-13T02:38:25.701068455Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 02:38:25.701294 containerd[1450]: time="2024-12-13T02:38:25.701275864Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 02:38:25.701368 containerd[1450]: time="2024-12-13T02:38:25.701353580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 02:38:25.701519 containerd[1450]: time="2024-12-13T02:38:25.701498702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:38:25.701586 containerd[1450]: time="2024-12-13T02:38:25.701571899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:38:25.701806 containerd[1450]: time="2024-12-13T02:38:25.701784107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:38:25.701871 containerd[1450]: time="2024-12-13T02:38:25.701856994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 02:38:25.701937 containerd[1450]: time="2024-12-13T02:38:25.701921254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:38:25.701994 containerd[1450]: time="2024-12-13T02:38:25.701980435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:38:25.702130 containerd[1450]: time="2024-12-13T02:38:25.702112353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:38:25.702626 containerd[1450]: time="2024-12-13T02:38:25.702582905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:38:25.703137 containerd[1450]: time="2024-12-13T02:38:25.702815151Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:38:25.703137 containerd[1450]: time="2024-12-13T02:38:25.703075589Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 02:38:25.703331 containerd[1450]: time="2024-12-13T02:38:25.703286765Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 02:38:25.703486 containerd[1450]: time="2024-12-13T02:38:25.703443790Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 02:38:25.714566 containerd[1450]: time="2024-12-13T02:38:25.714527517Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 02:38:25.714624 containerd[1450]: time="2024-12-13T02:38:25.714587570Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 02:38:25.714624 containerd[1450]: time="2024-12-13T02:38:25.714610523Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 02:38:25.714679 containerd[1450]: time="2024-12-13T02:38:25.714631212Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 02:38:25.714679 containerd[1450]: time="2024-12-13T02:38:25.714652492Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:38:25.714835 containerd[1450]: time="2024-12-13T02:38:25.714801551Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:38:25.715117 containerd[1450]: time="2024-12-13T02:38:25.715088730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:38:25.715224 containerd[1450]: time="2024-12-13T02:38:25.715193957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 02:38:25.715224 containerd[1450]: time="2024-12-13T02:38:25.715220257Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 02:38:25.715278 containerd[1450]: time="2024-12-13T02:38:25.715236457Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 02:38:25.715278 containerd[1450]: time="2024-12-13T02:38:25.715253790Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715278 containerd[1450]: time="2024-12-13T02:38:25.715268507Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715336 containerd[1450]: time="2024-12-13T02:38:25.715282443Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715336 containerd[1450]: time="2024-12-13T02:38:25.715299014Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715336 containerd[1450]: time="2024-12-13T02:38:25.715322017Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715397 containerd[1450]: time="2024-12-13T02:38:25.715338959Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715397 containerd[1450]: time="2024-12-13T02:38:25.715353336Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715397 containerd[1450]: time="2024-12-13T02:38:25.715365459Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:38:25.715397 containerd[1450]: time="2024-12-13T02:38:25.715387530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.715493 containerd[1450]: time="2024-12-13T02:38:25.715402879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.715493 containerd[1450]: time="2024-12-13T02:38:25.715417607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.715493 containerd[1450]: time="2024-12-13T02:38:25.715432555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.715493 containerd[1450]: time="2024-12-13T02:38:25.715447543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.715493 containerd[1450]: time="2024-12-13T02:38:25.715462661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718490 containerd[1450]: time="2024-12-13T02:38:25.718429355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718490 containerd[1450]: time="2024-12-13T02:38:25.718462998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718565 containerd[1450]: time="2024-12-13T02:38:25.718494397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718565 containerd[1450]: time="2024-12-13T02:38:25.718515737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718565 containerd[1450]: time="2024-12-13T02:38:25.718530004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718565 containerd[1450]: time="2024-12-13T02:38:25.718551835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718649 containerd[1450]: time="2024-12-13T02:38:25.718568015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718649 containerd[1450]: time="2024-12-13T02:38:25.718588614Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 02:38:25.718649 containerd[1450]: time="2024-12-13T02:38:25.718616136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718649 containerd[1450]: time="2024-12-13T02:38:25.718633107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718739 containerd[1450]: time="2024-12-13T02:38:25.718647144Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:38:25.718739 containerd[1450]: time="2024-12-13T02:38:25.718704792Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 02:38:25.718739 containerd[1450]: time="2024-12-13T02:38:25.718729218Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 02:38:25.718803 containerd[1450]: time="2024-12-13T02:38:25.718742673Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 02:38:25.718803 containerd[1450]: time="2024-12-13T02:38:25.718757401Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 02:38:25.718803 containerd[1450]: time="2024-12-13T02:38:25.718769804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.718803 containerd[1450]: time="2024-12-13T02:38:25.718784662Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 02:38:25.718803 containerd[1450]: time="2024-12-13T02:38:25.718796784Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 02:38:25.718923 containerd[1450]: time="2024-12-13T02:38:25.718809899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:38:25.719210 containerd[1450]: time="2024-12-13T02:38:25.719128637Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 02:38:25.719346 containerd[1450]: time="2024-12-13T02:38:25.719214758Z" level=info msg="Connect containerd service"
Dec 13 02:38:25.719346 containerd[1450]: time="2024-12-13T02:38:25.719252659Z" level=info msg="using legacy CRI server"
Dec 13 02:38:25.719346 containerd[1450]: time="2024-12-13T02:38:25.719261185Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 02:38:25.719410 containerd[1450]: time="2024-12-13T02:38:25.719359360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 02:38:25.723757 containerd[1450]: time="2024-12-13T02:38:25.723714788Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.723902590Z" level=info msg="Start subscribing containerd event"
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.723979284Z" level=info msg="Start recovering state"
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.724047973Z" level=info msg="Start event monitor"
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.724060977Z" level=info msg="Start snapshots syncer"
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.724073882Z" level=info msg="Start cni network conf syncer for default"
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.724083019Z" level=info msg="Start streaming server"
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.724125268Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 02:38:25.724944 containerd[1450]: time="2024-12-13T02:38:25.724177336Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 02:38:25.724313 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 02:38:25.728366 containerd[1450]: time="2024-12-13T02:38:25.728345463Z" level=info msg="containerd successfully booted in 0.061267s"
Dec 13 02:38:25.949430 tar[1448]: linux-amd64/LICENSE
Dec 13 02:38:25.949839 tar[1448]: linux-amd64/README.md
Dec 13 02:38:25.959408 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 02:38:26.407084 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 02:38:26.420262 systemd[1]: Started sshd@0-172.24.4.208:22-172.24.4.1:38920.service - OpenSSH per-connection server daemon (172.24.4.1:38920).
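The level=error entry about CNI is expected on first boot: the CRI plugin's conf syncer watches /etc/cni/net.d, which is still empty. A hypothetical minimal conflist that would satisfy it (the name, bridge, and 10.88.0.0/16 subnet are illustrative, not taken from this host; a real cluster add-on such as Flannel or Calico normally installs its own):

    import json
    import pathlib

    # Illustrative bridge + portmap chain; containerd's
    # "Start cni network conf syncer for default" loop above picks up
    # *.conflist files from this directory once they appear.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    conf_dir = pathlib.Path("/etc/cni/net.d")
    conf_dir.mkdir(parents=True, exist_ok=True)
    (conf_dir / "10-containerd-net.conflist").write_text(json.dumps(conflist, indent=2))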
Dec 13 02:38:26.973927 systemd-networkd[1362]: eth0: Gained IPv6LL
Dec 13 02:38:26.978866 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 02:38:26.984293 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 02:38:26.995013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:38:27.009623 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 02:38:27.072799 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 02:38:27.397250 sshd[1518]: Accepted publickey for core from 172.24.4.1 port 38920 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:27.400695 sshd[1518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:27.430961 systemd-logind[1432]: New session 1 of user core.
Dec 13 02:38:27.436928 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 02:38:27.452579 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 02:38:27.482138 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 02:38:27.493040 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 02:38:27.500114 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:38:27.617292 systemd[1534]: Queued start job for default target default.target.
Dec 13 02:38:27.625359 systemd[1534]: Created slice app.slice - User Application Slice.
Dec 13 02:38:27.625381 systemd[1534]: Reached target paths.target - Paths.
Dec 13 02:38:27.625396 systemd[1534]: Reached target timers.target - Timers.
Dec 13 02:38:27.631657 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 02:38:27.643169 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 02:38:27.643427 systemd[1534]: Reached target sockets.target - Sockets.
Dec 13 02:38:27.643637 systemd[1534]: Reached target basic.target - Basic System.
Dec 13 02:38:27.643812 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 02:38:27.645529 systemd[1534]: Reached target default.target - Main User Target.
Dec 13 02:38:27.645571 systemd[1534]: Startup finished in 139ms.
Dec 13 02:38:27.650672 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 02:38:28.153921 systemd[1]: Started sshd@1-172.24.4.208:22-172.24.4.1:38922.service - OpenSSH per-connection server daemon (172.24.4.1:38922).
Dec 13 02:38:28.588049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:38:28.593399 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:38:29.456538 sshd[1546]: Accepted publickey for core from 172.24.4.1 port 38922 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:29.457373 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:29.470413 systemd-logind[1432]: New session 2 of user core.
Dec 13 02:38:29.476791 systemd[1]: Started session-2.scope - Session 2 of User core.
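The sshd entries identify the client key only by fingerprint (RSA SHA256:s+jMJkc8...). That string is the unpadded base64 of the SHA-256 digest of the raw key blob, and can be reproduced from any authorized_keys line; a stdlib-only sketch:

    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # authorized_keys format: "<type> <base64 blob> [comment]".
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        # OpenSSH prints the digest base64-encoded with '=' padding stripped.
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

Feeding it the first line of /home/core/.ssh/authorized_keys on this host should reproduce the SHA256:... value logged for each accepted connection.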
Dec 13 02:38:30.104824 kubelet[1554]: E1213 02:38:30.104607 1554 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:38:30.107901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:38:30.108226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:38:30.108897 systemd[1]: kubelet.service: Consumed 2.051s CPU time.
Dec 13 02:38:30.190137 sshd[1546]: pam_unix(sshd:session): session closed for user core
Dec 13 02:38:30.199867 systemd[1]: sshd@1-172.24.4.208:22-172.24.4.1:38922.service: Deactivated successfully.
Dec 13 02:38:30.203149 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 02:38:30.206807 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit.
Dec 13 02:38:30.214204 systemd[1]: Started sshd@2-172.24.4.208:22-172.24.4.1:38924.service - OpenSSH per-connection server daemon (172.24.4.1:38924).
Dec 13 02:38:30.222261 systemd-logind[1432]: Removed session 2.
Dec 13 02:38:30.869332 login[1507]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:38:30.872763 login[1508]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:38:30.880101 systemd-logind[1432]: New session 4 of user core.
Dec 13 02:38:30.891932 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 02:38:30.898863 systemd-logind[1432]: New session 3 of user core.
Dec 13 02:38:30.906175 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 02:38:31.233131 sshd[1567]: Accepted publickey for core from 172.24.4.1 port 38924 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:31.235739 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:31.244027 systemd-logind[1432]: New session 5 of user core.
Dec 13 02:38:31.255859 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 02:38:31.926844 sshd[1567]: pam_unix(sshd:session): session closed for user core
Dec 13 02:38:31.934750 systemd[1]: sshd@2-172.24.4.208:22-172.24.4.1:38924.service: Deactivated successfully.
Dec 13 02:38:31.938422 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:38:31.940425 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:38:31.942833 systemd-logind[1432]: Removed session 5.
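kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during kubeadm init/join, so the unit keeps crash-looping (restart counter 1, 2, ... below) until the node is bootstrapped. Purely for illustration, a hypothetical minimal KubeletConfiguration that would get past this particular open() error (placeholder values, not this cluster's real config):

    import pathlib
    import textwrap

    # Hypothetical minimal config; real deployments generate this file.
    KUBELET_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        staticPodPath: /etc/kubernetes/manifests
        """)

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)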
Dec 13 02:38:32.082118 coreos-metadata[1419]: Dec 13 02:38:32.082 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 02:38:32.331745 coreos-metadata[1419]: Dec 13 02:38:32.331 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 13 02:38:32.539385 coreos-metadata[1419]: Dec 13 02:38:32.539 INFO Fetch successful
Dec 13 02:38:32.539641 coreos-metadata[1419]: Dec 13 02:38:32.539 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 02:38:32.555453 coreos-metadata[1419]: Dec 13 02:38:32.555 INFO Fetch successful
Dec 13 02:38:32.555453 coreos-metadata[1419]: Dec 13 02:38:32.555 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 02:38:32.569850 coreos-metadata[1419]: Dec 13 02:38:32.569 INFO Fetch successful
Dec 13 02:38:32.569850 coreos-metadata[1419]: Dec 13 02:38:32.569 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 02:38:32.575541 coreos-metadata[1493]: Dec 13 02:38:32.575 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 02:38:32.586558 coreos-metadata[1419]: Dec 13 02:38:32.586 INFO Fetch successful
Dec 13 02:38:32.586558 coreos-metadata[1419]: Dec 13 02:38:32.586 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 02:38:32.603714 coreos-metadata[1419]: Dec 13 02:38:32.603 INFO Fetch successful
Dec 13 02:38:32.603714 coreos-metadata[1419]: Dec 13 02:38:32.603 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 02:38:32.617810 coreos-metadata[1419]: Dec 13 02:38:32.617 INFO Fetch successful
Dec 13 02:38:32.618985 coreos-metadata[1493]: Dec 13 02:38:32.618 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 02:38:32.635023 coreos-metadata[1493]: Dec 13 02:38:32.634 INFO Fetch successful
Dec 13 02:38:32.635023 coreos-metadata[1493]: Dec 13 02:38:32.635 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 02:38:32.650254 coreos-metadata[1493]: Dec 13 02:38:32.649 INFO Fetch successful
Dec 13 02:38:32.655722 unknown[1493]: wrote ssh authorized keys file for user: core
Dec 13 02:38:32.672812 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 02:38:32.681328 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 02:38:32.700322 update-ssh-keys[1607]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:38:32.701250 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 02:38:32.705781 systemd[1]: Finished sshkeys.service.
Dec 13 02:38:32.710794 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 02:38:32.711347 systemd[1]: Startup finished in 1.096s (kernel) + 15.251s (initrd) + 11.983s (userspace) = 28.330s.
Dec 13 02:38:40.115829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:38:40.126852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:38:40.538795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
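Both metadata agents (PIDs 1419 and 1493) fail to find a config drive and fall back to the link-local metadata service, fetching the endpoints listed above over plain HTTP. A stdlib-only sketch of the same requests (paths taken from the log; 169.254.169.254 is only reachable from inside the instance):

    import urllib.request

    BASE = "http://169.254.169.254"
    # Endpoint paths copied from the coreos-metadata entries above.
    PATHS = [
        "/openstack/2012-08-10/meta_data.json",
        "/latest/meta-data/hostname",
        "/latest/meta-data/instance-id",
        "/latest/meta-data/public-keys/0/openssh-key",
    ]

    for path in PATHS:
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            print(path, "->", resp.read(60))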
Dec 13 02:38:40.552089 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:38:40.796339 kubelet[1620]: E1213 02:38:40.796088 1620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:38:40.803944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:38:40.804235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:38:41.953090 systemd[1]: Started sshd@3-172.24.4.208:22-172.24.4.1:49922.service - OpenSSH per-connection server daemon (172.24.4.1:49922).
Dec 13 02:38:43.348062 sshd[1629]: Accepted publickey for core from 172.24.4.1 port 49922 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:43.351430 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:43.360909 systemd-logind[1432]: New session 6 of user core.
Dec 13 02:38:43.376849 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 02:38:43.977206 sshd[1629]: pam_unix(sshd:session): session closed for user core
Dec 13 02:38:43.992444 systemd[1]: sshd@3-172.24.4.208:22-172.24.4.1:49922.service: Deactivated successfully.
Dec 13 02:38:43.995251 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:38:43.996618 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:38:44.004044 systemd[1]: Started sshd@4-172.24.4.208:22-172.24.4.1:49926.service - OpenSSH per-connection server daemon (172.24.4.1:49926).
Dec 13 02:38:44.006346 systemd-logind[1432]: Removed session 6.
Dec 13 02:38:45.592565 sshd[1636]: Accepted publickey for core from 172.24.4.1 port 49926 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:45.595108 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:45.605746 systemd-logind[1432]: New session 7 of user core.
Dec 13 02:38:45.612774 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 02:38:46.283872 sshd[1636]: pam_unix(sshd:session): session closed for user core
Dec 13 02:38:46.295199 systemd[1]: sshd@4-172.24.4.208:22-172.24.4.1:49926.service: Deactivated successfully.
Dec 13 02:38:46.298466 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:38:46.301026 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:38:46.307054 systemd[1]: Started sshd@5-172.24.4.208:22-172.24.4.1:39616.service - OpenSSH per-connection server daemon (172.24.4.1:39616).
Dec 13 02:38:46.309934 systemd-logind[1432]: Removed session 7.
Dec 13 02:38:47.725751 sshd[1643]: Accepted publickey for core from 172.24.4.1 port 39616 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:47.728872 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:47.739607 systemd-logind[1432]: New session 8 of user core.
Dec 13 02:38:47.750964 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 02:38:48.491500 sshd[1643]: pam_unix(sshd:session): session closed for user core
Dec 13 02:38:48.503247 systemd[1]: sshd@5-172.24.4.208:22-172.24.4.1:39616.service: Deactivated successfully.
Dec 13 02:38:48.505118 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:38:48.507105 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:38:48.512845 systemd[1]: Started sshd@6-172.24.4.208:22-172.24.4.1:39620.service - OpenSSH per-connection server daemon (172.24.4.1:39620).
Dec 13 02:38:48.515407 systemd-logind[1432]: Removed session 8.
Dec 13 02:38:49.969329 sshd[1650]: Accepted publickey for core from 172.24.4.1 port 39620 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:49.972176 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:49.981559 systemd-logind[1432]: New session 9 of user core.
Dec 13 02:38:49.993795 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 02:38:50.543000 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 02:38:50.544420 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:38:50.566940 sudo[1653]: pam_unix(sudo:session): session closed for user root
Dec 13 02:38:50.833676 sshd[1650]: pam_unix(sshd:session): session closed for user core
Dec 13 02:38:50.846622 systemd[1]: sshd@6-172.24.4.208:22-172.24.4.1:39620.service: Deactivated successfully.
Dec 13 02:38:50.849474 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:38:50.851331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 02:38:50.853051 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:38:50.860933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:38:50.869080 systemd[1]: Started sshd@7-172.24.4.208:22-172.24.4.1:39622.service - OpenSSH per-connection server daemon (172.24.4.1:39622).
Dec 13 02:38:50.876242 systemd-logind[1432]: Removed session 9.
Dec 13 02:38:51.287890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:38:51.292921 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:38:51.600187 kubelet[1668]: E1213 02:38:51.598816 1668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:38:51.605763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:38:51.606127 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:38:52.314289 sshd[1659]: Accepted publickey for core from 172.24.4.1 port 39622 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:52.316457 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:52.321702 systemd-logind[1432]: New session 10 of user core.
Dec 13 02:38:52.335844 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 02:38:52.773453 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 02:38:52.774176 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:38:52.781987 sudo[1678]: pam_unix(sudo:session): session closed for user root
Dec 13 02:38:52.797046 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 02:38:52.798402 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:38:52.823962 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 02:38:52.840087 auditctl[1681]: No rules
Dec 13 02:38:52.840870 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 02:38:52.841242 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 02:38:52.850461 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 02:38:52.912117 augenrules[1699]: No rules
Dec 13 02:38:52.914274 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 02:38:52.916834 sudo[1677]: pam_unix(sudo:session): session closed for user root
Dec 13 02:38:53.090048 sshd[1659]: pam_unix(sshd:session): session closed for user core
Dec 13 02:38:53.097851 systemd[1]: sshd@7-172.24.4.208:22-172.24.4.1:39622.service: Deactivated successfully.
Dec 13 02:38:53.099256 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:38:53.101652 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:38:53.106972 systemd[1]: Started sshd@8-172.24.4.208:22-172.24.4.1:39626.service - OpenSSH per-connection server daemon (172.24.4.1:39626).
Dec 13 02:38:53.109808 systemd-logind[1432]: Removed session 10.
Dec 13 02:38:54.117456 sshd[1707]: Accepted publickey for core from 172.24.4.1 port 39626 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:38:54.119872 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:38:54.128449 systemd-logind[1432]: New session 11 of user core.
Dec 13 02:38:54.139805 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 02:38:54.589813 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:38:54.590432 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 02:38:55.187989 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 02:38:55.192082 (dockerd)[1725]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 02:38:55.900764 dockerd[1725]: time="2024-12-13T02:38:55.900682053Z" level=info msg="Starting up"
Dec 13 02:38:56.073610 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3331534188-merged.mount: Deactivated successfully.
Dec 13 02:38:56.117562 dockerd[1725]: time="2024-12-13T02:38:56.117305109Z" level=info msg="Loading containers: start."
Dec 13 02:38:56.291302 kernel: Initializing XFRM netlink socket
Dec 13 02:38:56.437240 systemd-networkd[1362]: docker0: Link UP
Dec 13 02:38:56.452348 dockerd[1725]: time="2024-12-13T02:38:56.452006191Z" level=info msg="Loading containers: done."
Dec 13 02:38:56.468522 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3626663827-merged.mount: Deactivated successfully.
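The audit-rules exchange above amounts to: remove the shipped rules files, flush the loaded ruleset, and recompile whatever remains of /etc/audit/rules.d (nothing, hence "No rules" from both auditctl and augenrules). A sketch of the same sequence with the standard auditd tools, run as root:

    import subprocess

    # Flush the in-kernel ruleset, then list it; auditctl prints
    # "No rules" when the list is empty, as in the log above.
    subprocess.run(["auditctl", "-D"], check=True)
    subprocess.run(["auditctl", "-l"], check=True)

    # augenrules concatenates /etc/audit/rules.d/*.rules and loads the
    # result, which is what audit-rules.service does when restarted.
    subprocess.run(["augenrules", "--load"], check=True)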
Dec 13 02:38:56.470405 dockerd[1725]: time="2024-12-13T02:38:56.470343002Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 02:38:56.470562 dockerd[1725]: time="2024-12-13T02:38:56.470528710Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 02:38:56.470678 dockerd[1725]: time="2024-12-13T02:38:56.470646901Z" level=info msg="Daemon has completed initialization"
Dec 13 02:38:56.510977 dockerd[1725]: time="2024-12-13T02:38:56.510808074Z" level=info msg="API listen on /run/docker.sock"
Dec 13 02:38:56.511556 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 02:38:58.227084 containerd[1450]: time="2024-12-13T02:38:58.226777072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 02:38:59.055104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878621851.mount: Deactivated successfully.
Dec 13 02:39:01.585373 containerd[1450]: time="2024-12-13T02:39:01.583828014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:01.591850 containerd[1450]: time="2024-12-13T02:39:01.591790334Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262"
Dec 13 02:39:01.594439 containerd[1450]: time="2024-12-13T02:39:01.594386282Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:01.597426 containerd[1450]: time="2024-12-13T02:39:01.597345552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:01.599115 containerd[1450]: time="2024-12-13T02:39:01.598832095Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.371997387s"
Dec 13 02:39:01.599115 containerd[1450]: time="2024-12-13T02:39:01.598877187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 02:39:01.615534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 02:39:01.622156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:39:01.628511 containerd[1450]: time="2024-12-13T02:39:01.628436871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 02:39:01.752177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
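Once dockerd logs "API listen on /run/docker.sock", the daemon answers HTTP on that unix socket. A stdlib-only probe of its /_ping endpoint (requires permission to open the socket):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a unix socket instead of TCP."""

        def __init__(self, path: str = "/run/docker.sock"):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection()
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())  # b'OK' while the daemon is up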
Dec 13 02:39:01.756623 (kubelet)[1936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:39:02.374435 kubelet[1936]: E1213 02:39:02.374227 1936 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:39:02.381226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:39:02.381650 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:39:05.099200 containerd[1450]: time="2024-12-13T02:39:05.099010965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:05.101149 containerd[1450]: time="2024-12-13T02:39:05.100827509Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740"
Dec 13 02:39:05.102299 containerd[1450]: time="2024-12-13T02:39:05.102231700Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:05.106656 containerd[1450]: time="2024-12-13T02:39:05.106568490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:05.108293 containerd[1450]: time="2024-12-13T02:39:05.107823875Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.479333666s"
Dec 13 02:39:05.108293 containerd[1450]: time="2024-12-13T02:39:05.107869199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 02:39:05.137252 containerd[1450]: time="2024-12-13T02:39:05.137217437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 02:39:07.053723 containerd[1450]: time="2024-12-13T02:39:07.053652737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:07.055216 containerd[1450]: time="2024-12-13T02:39:07.055073092Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Dec 13 02:39:07.056298 containerd[1450]: time="2024-12-13T02:39:07.056249443Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:07.062536 containerd[1450]: time="2024-12-13T02:39:07.062452258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:07.064297 containerd[1450]: time="2024-12-13T02:39:07.063534344Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.926240517s"
Dec 13 02:39:07.064297 containerd[1450]: time="2024-12-13T02:39:07.063570422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 02:39:07.087773 containerd[1450]: time="2024-12-13T02:39:07.087736052Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 02:39:08.851869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082369405.mount: Deactivated successfully.
Dec 13 02:39:09.914338 containerd[1450]: time="2024-12-13T02:39:09.914173890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:09.916225 containerd[1450]: time="2024-12-13T02:39:09.916113923Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Dec 13 02:39:09.918145 containerd[1450]: time="2024-12-13T02:39:09.917995287Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:09.935144 containerd[1450]: time="2024-12-13T02:39:09.934565020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:09.937997 containerd[1450]: time="2024-12-13T02:39:09.937840403Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.849895855s"
Dec 13 02:39:09.938380 containerd[1450]: time="2024-12-13T02:39:09.938131484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 02:39:10.000691 containerd[1450]: time="2024-12-13T02:39:10.000509894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 02:39:10.396634 update_engine[1440]: I20241213 02:39:10.396076 1440 update_attempter.cc:509] Updating boot flags...
Dec 13 02:39:10.464760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1982)
Dec 13 02:39:10.534603 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1983)
Dec 13 02:39:10.585335 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1983)
Dec 13 02:39:10.711146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492370649.mount: Deactivated successfully.
Dec 13 02:39:11.913517 containerd[1450]: time="2024-12-13T02:39:11.912607293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:11.915036 containerd[1450]: time="2024-12-13T02:39:11.914992408Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Dec 13 02:39:11.916462 containerd[1450]: time="2024-12-13T02:39:11.916436323Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:11.919617 containerd[1450]: time="2024-12-13T02:39:11.919591560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:11.920906 containerd[1450]: time="2024-12-13T02:39:11.920880176Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.920321451s"
Dec 13 02:39:11.920989 containerd[1450]: time="2024-12-13T02:39:11.920972919Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 02:39:11.949207 containerd[1450]: time="2024-12-13T02:39:11.949159780Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 02:39:12.548934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 02:39:12.559655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:39:12.576910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18866285.mount: Deactivated successfully.
Dec 13 02:39:12.594863 containerd[1450]: time="2024-12-13T02:39:12.594792678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:12.598468 containerd[1450]: time="2024-12-13T02:39:12.598367438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Dec 13 02:39:12.601219 containerd[1450]: time="2024-12-13T02:39:12.600589602Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:12.607760 containerd[1450]: time="2024-12-13T02:39:12.607661971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:12.609990 containerd[1450]: time="2024-12-13T02:39:12.609930400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 660.716351ms"
Dec 13 02:39:12.610382 containerd[1450]: time="2024-12-13T02:39:12.610173673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 02:39:12.655417 containerd[1450]: time="2024-12-13T02:39:12.655319454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 02:39:12.727892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:39:12.732280 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:39:12.796216 kubelet[2055]: E1213 02:39:12.796165 2055 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:39:12.799636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:39:12.799897 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:39:14.510651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045553700.mount: Deactivated successfully.
Dec 13 02:39:19.482126 containerd[1450]: time="2024-12-13T02:39:19.482027098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:19.484072 containerd[1450]: time="2024-12-13T02:39:19.483682397Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Dec 13 02:39:19.485336 containerd[1450]: time="2024-12-13T02:39:19.485287591Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:19.490504 containerd[1450]: time="2024-12-13T02:39:19.490441599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:19.491733 containerd[1450]: time="2024-12-13T02:39:19.491705297Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.836339477s"
Dec 13 02:39:19.491827 containerd[1450]: time="2024-12-13T02:39:19.491809241Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 02:39:22.865785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 02:39:22.875963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:39:23.425915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:39:23.436570 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 02:39:24.089673 kubelet[2176]: E1213 02:39:24.089581 2176 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:39:24.091778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:39:24.091933 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:39:24.302057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:39:24.323061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:39:24.361138 systemd[1]: Reloading requested from client PID 2190 ('systemctl') (unit session-11.scope)...
Dec 13 02:39:24.361165 systemd[1]: Reloading...
Dec 13 02:39:24.483522 zram_generator::config[2225]: No configuration found.
Dec 13 02:39:24.631652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:39:24.714218 systemd[1]: Reloading finished in 352 ms.
Dec 13 02:39:24.770028 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 02:39:24.770110 systemd[1]: kubelet.service: Failed with result 'signal'.
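By this point kubelet has been through five scheduled restarts, and the systemctl call from session-11 (the install.sh run) drives a daemon-reload and a fresh restart; the status=15/TERM above is systemd stopping the unit during that restart rather than another config crash. The counters systemd keeps can be read back directly; a sketch using systemctl show properties (output values illustrative):

    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "kubelet.service",
         "--property=NRestarts,ExecMainStatus,Result"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
    # e.g.  NRestarts=5
    #       ExecMainStatus=1
    #       Result=exit-code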
Dec 13 02:39:24.770341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:39:24.775813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 02:39:25.172416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 02:39:25.193138 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 02:39:26.155228 kubelet[2296]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:39:26.155228 kubelet[2296]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:39:26.155228 kubelet[2296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:39:26.155692 kubelet[2296]: I1213 02:39:26.155336 2296 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:39:27.025779 kubelet[2296]: I1213 02:39:27.025702 2296 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 02:39:27.025779 kubelet[2296]: I1213 02:39:27.025738 2296 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:39:27.026075 kubelet[2296]: I1213 02:39:27.025969 2296 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 02:39:27.060974 kubelet[2296]: I1213 02:39:27.060668 2296 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:39:27.067644 kubelet[2296]: E1213 02:39:27.067581 2296 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.208:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.088650 kubelet[2296]: I1213 02:39:27.088341 2296 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:39:27.088993 kubelet[2296]: I1213 02:39:27.088963 2296 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:39:27.092197 kubelet[2296]: I1213 02:39:27.092159 2296 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:39:27.092617 kubelet[2296]: I1213 02:39:27.092355 2296 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:39:27.092617 kubelet[2296]: I1213 02:39:27.092380 2296 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:39:27.095036 kubelet[2296]: I1213 02:39:27.094899 2296 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:39:27.095132 kubelet[2296]: I1213 02:39:27.095120 2296 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 02:39:27.095658 kubelet[2296]: I1213 02:39:27.095645 2296 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:39:27.095904 kubelet[2296]: I1213 02:39:27.095738 2296 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:39:27.095904 kubelet[2296]: I1213 02:39:27.095756 2296 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:39:27.095973 kubelet[2296]: W1213 02:39:27.095885 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-31d3d6554f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.096043 kubelet[2296]: E1213 02:39:27.096002 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-31d3d6554f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.098079 kubelet[2296]: W1213 02:39:27.097663 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.208:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.098079 kubelet[2296]: E1213 02:39:27.097716 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.208:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.099693 kubelet[2296]: I1213 02:39:27.099678 2296 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 02:39:27.105398 kubelet[2296]: I1213 02:39:27.105380 2296 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:39:27.108601 kubelet[2296]: W1213 02:39:27.108582 2296 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 02:39:27.110290 kubelet[2296]: I1213 02:39:27.109671 2296 server.go:1256] "Started kubelet"
Dec 13 02:39:27.110906 kubelet[2296]: I1213 02:39:27.110890 2296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:39:27.118624 kubelet[2296]: I1213 02:39:27.118602 2296 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:39:27.119617 kubelet[2296]: I1213 02:39:27.119602 2296 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 02:39:27.120809 kubelet[2296]: I1213 02:39:27.120796 2296 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:39:27.121064 kubelet[2296]: I1213 02:39:27.121030 2296 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:39:27.122641 kubelet[2296]: I1213 02:39:27.122623 2296 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 02:39:27.129366 kubelet[2296]: E1213 02:39:27.129140 2296 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.208:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.208:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-b-31d3d6554f.novalocal.18109c2b7ba5e761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-31d3d6554f.novalocal,UID:ci-4081-2-1-b-31d3d6554f.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-31d3d6554f.novalocal,},FirstTimestamp:2024-12-13 02:39:27.109637985 +0000 UTC m=+1.909349113,LastTimestamp:2024-12-13 02:39:27.109637985 +0000 UTC m=+1.909349113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-31d3d6554f.novalocal,}"
Dec 13 02:39:27.129831 kubelet[2296]: I1213 02:39:27.129810 2296 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 02:39:27.130102 kubelet[2296]: E1213 02:39:27.130055 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-31d3d6554f.novalocal?timeout=10s\": dial tcp 172.24.4.208:6443: connect: connection refused" interval="200ms"
Dec 13 02:39:27.133461 kubelet[2296]: I1213 02:39:27.133311 2296 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 02:39:27.133708 kubelet[2296]: I1213 02:39:27.133658 2296 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:39:27.137868 kubelet[2296]: W1213 02:39:27.137813 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.137995 kubelet[2296]: E1213 02:39:27.137965 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.141513 kubelet[2296]: I1213 02:39:27.139816 2296 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:39:27.141513 kubelet[2296]: I1213 02:39:27.139834 2296 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:39:27.157772 kubelet[2296]: E1213 02:39:27.157732 2296 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:39:27.162911 kubelet[2296]: I1213 02:39:27.162862 2296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:39:27.165683 kubelet[2296]: I1213 02:39:27.165655 2296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:39:27.165744 kubelet[2296]: I1213 02:39:27.165698 2296 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:39:27.165744 kubelet[2296]: I1213 02:39:27.165726 2296 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 02:39:27.165820 kubelet[2296]: E1213 02:39:27.165782 2296 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 02:39:27.171338 kubelet[2296]: W1213 02:39:27.171272 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.171968 kubelet[2296]: E1213 02:39:27.171941 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused
Dec 13 02:39:27.174080 kubelet[2296]: I1213 02:39:27.174040 2296 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:39:27.174080 kubelet[2296]: I1213 02:39:27.174060 2296 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:39:27.174080 kubelet[2296]: I1213 02:39:27.174080 2296 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:39:27.181899 kubelet[2296]: I1213 02:39:27.181849 2296 policy_none.go:49] "None policy: Start"
Dec 13 02:39:27.182644 kubelet[2296]: I1213 02:39:27.182616 2296 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:39:27.182644 kubelet[2296]: I1213 02:39:27.182647 2296 state_mem.go:35] "Initializing new
in-memory state store" Dec 13 02:39:27.193034 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 02:39:27.202742 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 02:39:27.206447 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 02:39:27.218316 kubelet[2296]: I1213 02:39:27.217420 2296 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:39:27.218316 kubelet[2296]: I1213 02:39:27.217711 2296 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:39:27.219219 kubelet[2296]: E1213 02:39:27.219187 2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-b-31d3d6554f.novalocal\" not found" Dec 13 02:39:27.225387 kubelet[2296]: I1213 02:39:27.225357 2296 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.225765 kubelet[2296]: E1213 02:39:27.225737 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.208:6443/api/v1/nodes\": dial tcp 172.24.4.208:6443: connect: connection refused" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.266842 kubelet[2296]: I1213 02:39:27.266470 2296 topology_manager.go:215] "Topology Admit Handler" podUID="79778268541385d03c0f2dd18dd10a55" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.270787 kubelet[2296]: I1213 02:39:27.270559 2296 topology_manager.go:215] "Topology Admit Handler" podUID="8c3812177f6981989f1a86e0ec7463df" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.275105 kubelet[2296]: I1213 02:39:27.275037 2296 topology_manager.go:215] "Topology Admit Handler" podUID="d0d656891e32b8bcac1f06c0a945818c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.289880 systemd[1]: Created slice kubepods-burstable-pod79778268541385d03c0f2dd18dd10a55.slice - libcontainer container kubepods-burstable-pod79778268541385d03c0f2dd18dd10a55.slice. Dec 13 02:39:27.322688 systemd[1]: Created slice kubepods-burstable-pod8c3812177f6981989f1a86e0ec7463df.slice - libcontainer container kubepods-burstable-pod8c3812177f6981989f1a86e0ec7463df.slice. 
Dec 13 02:39:27.331602 kubelet[2296]: E1213 02:39:27.331535 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-31d3d6554f.novalocal?timeout=10s\": dial tcp 172.24.4.208:6443: connect: connection refused" interval="400ms" Dec 13 02:39:27.336145 kubelet[2296]: I1213 02:39:27.335198 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.336145 kubelet[2296]: I1213 02:39:27.335280 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.336145 kubelet[2296]: I1213 02:39:27.335339 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0d656891e32b8bcac1f06c0a945818c-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"d0d656891e32b8bcac1f06c0a945818c\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.336145 kubelet[2296]: I1213 02:39:27.335399 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79778268541385d03c0f2dd18dd10a55-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"79778268541385d03c0f2dd18dd10a55\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.336520 kubelet[2296]: I1213 02:39:27.335462 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.336520 kubelet[2296]: I1213 02:39:27.335554 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.336520 kubelet[2296]: I1213 02:39:27.335617 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 
02:39:27.336520 kubelet[2296]: I1213 02:39:27.335674 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79778268541385d03c0f2dd18dd10a55-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"79778268541385d03c0f2dd18dd10a55\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.337027 kubelet[2296]: I1213 02:39:27.335730 2296 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79778268541385d03c0f2dd18dd10a55-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"79778268541385d03c0f2dd18dd10a55\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.342133 systemd[1]: Created slice kubepods-burstable-podd0d656891e32b8bcac1f06c0a945818c.slice - libcontainer container kubepods-burstable-podd0d656891e32b8bcac1f06c0a945818c.slice. Dec 13 02:39:27.430664 kubelet[2296]: I1213 02:39:27.430569 2296 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.431477 kubelet[2296]: E1213 02:39:27.431430 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.208:6443/api/v1/nodes\": dial tcp 172.24.4.208:6443: connect: connection refused" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.616937 containerd[1450]: time="2024-12-13T02:39:27.616615158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal,Uid:79778268541385d03c0f2dd18dd10a55,Namespace:kube-system,Attempt:0,}" Dec 13 02:39:27.646157 containerd[1450]: time="2024-12-13T02:39:27.645676006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal,Uid:8c3812177f6981989f1a86e0ec7463df,Namespace:kube-system,Attempt:0,}" Dec 13 02:39:27.650056 containerd[1450]: time="2024-12-13T02:39:27.649778907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal,Uid:d0d656891e32b8bcac1f06c0a945818c,Namespace:kube-system,Attempt:0,}" Dec 13 02:39:27.733384 kubelet[2296]: E1213 02:39:27.733321 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-31d3d6554f.novalocal?timeout=10s\": dial tcp 172.24.4.208:6443: connect: connection refused" interval="800ms" Dec 13 02:39:27.835857 kubelet[2296]: I1213 02:39:27.835235 2296 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:27.836092 kubelet[2296]: E1213 02:39:27.836065 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.208:6443/api/v1/nodes\": dial tcp 172.24.4.208:6443: connect: connection refused" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:28.019672 kubelet[2296]: W1213 02:39:28.019350 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.208:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.019672 kubelet[2296]: E1213 02:39:28.019445 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed 
to list *v1.Service: Get "https://172.24.4.208:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.070866 kubelet[2296]: W1213 02:39:28.070760 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-31d3d6554f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.071743 kubelet[2296]: E1213 02:39:28.071628 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-31d3d6554f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.072325 kubelet[2296]: W1213 02:39:28.072199 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.072325 kubelet[2296]: E1213 02:39:28.072258 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.822001 kubelet[2296]: E1213 02:39:28.540788 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-31d3d6554f.novalocal?timeout=10s\": dial tcp 172.24.4.208:6443: connect: connection refused" interval="1.6s" Dec 13 02:39:28.822001 kubelet[2296]: I1213 02:39:28.640433 2296 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:28.822001 kubelet[2296]: E1213 02:39:28.641120 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.208:6443/api/v1/nodes\": dial tcp 172.24.4.208:6443: connect: connection refused" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:28.822001 kubelet[2296]: W1213 02:39:28.670356 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.822001 kubelet[2296]: E1213 02:39:28.670535 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:28.962642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945391602.mount: Deactivated successfully. 
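
Every failure above is the same symptom — nothing is listening on 172.24.4.208:6443 yet — and the lease controller's retry interval doubles each round (200ms, 400ms, 800ms, 1.6s so far; 3.2s appears further down). A standalone sketch of that probe-and-double pattern against the endpoint from the log; it illustrates the schedule, it is not the kubelet's code:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const apiserver = "172.24.4.208:6443" // endpoint taken from the log
    	interval := 200 * time.Millisecond    // first retry interval seen above
    	for attempt := 1; attempt <= 5; attempt++ {
    		conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("apiserver is accepting connections")
    			return
    		}
    		// e.g. "dial tcp 172.24.4.208:6443: connect: connection refused"
    		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, interval)
    		time.Sleep(interval)
    		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s
    	}
    }
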
Dec 13 02:39:28.976274 containerd[1450]: time="2024-12-13T02:39:28.976156612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:39:28.982563 containerd[1450]: time="2024-12-13T02:39:28.982254405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:39:28.987767 containerd[1450]: time="2024-12-13T02:39:28.987461774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:39:28.989618 containerd[1450]: time="2024-12-13T02:39:28.989447207Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:39:28.993109 containerd[1450]: time="2024-12-13T02:39:28.992897580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 02:39:28.995373 containerd[1450]: time="2024-12-13T02:39:28.995063921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:39:28.995373 containerd[1450]: time="2024-12-13T02:39:28.995238467Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:39:29.002994 containerd[1450]: time="2024-12-13T02:39:29.002912660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:39:29.007530 containerd[1450]: time="2024-12-13T02:39:29.007272665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.390453896s" Dec 13 02:39:29.011979 containerd[1450]: time="2024-12-13T02:39:29.011916171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.366046463s" Dec 13 02:39:29.016125 containerd[1450]: time="2024-12-13T02:39:29.015906365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.365887799s" Dec 13 02:39:29.209079 kubelet[2296]: E1213 02:39:29.209033 2296 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.208:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": 
dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:29.295966 containerd[1450]: time="2024-12-13T02:39:29.295845986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:39:29.296184 containerd[1450]: time="2024-12-13T02:39:29.295978854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:39:29.296184 containerd[1450]: time="2024-12-13T02:39:29.296018959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:39:29.296503 containerd[1450]: time="2024-12-13T02:39:29.296193135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:39:29.301822 containerd[1450]: time="2024-12-13T02:39:29.301746272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:39:29.301977 containerd[1450]: time="2024-12-13T02:39:29.301946457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:39:29.302242 containerd[1450]: time="2024-12-13T02:39:29.302121814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:39:29.304515 containerd[1450]: time="2024-12-13T02:39:29.303118649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:39:29.306410 containerd[1450]: time="2024-12-13T02:39:29.306039854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:39:29.306410 containerd[1450]: time="2024-12-13T02:39:29.306158515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:39:29.306410 containerd[1450]: time="2024-12-13T02:39:29.306190054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:39:29.306410 containerd[1450]: time="2024-12-13T02:39:29.306297796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:39:29.338683 systemd[1]: Started cri-containerd-7924d7d81fbaba9e4d1d58e284a5172810542e8bf20af5463c1579576388166a.scope - libcontainer container 7924d7d81fbaba9e4d1d58e284a5172810542e8bf20af5463c1579576388166a. Dec 13 02:39:29.344107 systemd[1]: Started cri-containerd-92fdc95ffaace0d45284819aa27605eef71fa98bdb0b9209cb5e0148f80e24c8.scope - libcontainer container 92fdc95ffaace0d45284819aa27605eef71fa98bdb0b9209cb5e0148f80e24c8. Dec 13 02:39:29.347442 systemd[1]: Started cri-containerd-aa16f1b9914f704e0beb84e03a449d7131ca2487257c7356a383d314d0d83293.scope - libcontainer container aa16f1b9914f704e0beb84e03a449d7131ca2487257c7356a383d314d0d83293. 
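
The three pause:3.8 pulls and the runc-shim plugin loading above are ordinary containerd operations in the CRI's image namespace. For reference, a pull like those can be reproduced with the containerd Go client (module github.com/containerd/containerd, matching the v1.7 line reported earlier); the socket path and the k8s.io namespace below are the usual CRI defaults, assumed rather than read from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled", img.Name())
    }
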
Dec 13 02:39:29.408738 containerd[1450]: time="2024-12-13T02:39:29.408656564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal,Uid:d0d656891e32b8bcac1f06c0a945818c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7924d7d81fbaba9e4d1d58e284a5172810542e8bf20af5463c1579576388166a\"" Dec 13 02:39:29.416700 containerd[1450]: time="2024-12-13T02:39:29.416628135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal,Uid:8c3812177f6981989f1a86e0ec7463df,Namespace:kube-system,Attempt:0,} returns sandbox id \"92fdc95ffaace0d45284819aa27605eef71fa98bdb0b9209cb5e0148f80e24c8\"" Dec 13 02:39:29.425787 containerd[1450]: time="2024-12-13T02:39:29.425723377Z" level=info msg="CreateContainer within sandbox \"7924d7d81fbaba9e4d1d58e284a5172810542e8bf20af5463c1579576388166a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:39:29.425910 containerd[1450]: time="2024-12-13T02:39:29.425727495Z" level=info msg="CreateContainer within sandbox \"92fdc95ffaace0d45284819aa27605eef71fa98bdb0b9209cb5e0148f80e24c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:39:29.444109 containerd[1450]: time="2024-12-13T02:39:29.444048695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal,Uid:79778268541385d03c0f2dd18dd10a55,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa16f1b9914f704e0beb84e03a449d7131ca2487257c7356a383d314d0d83293\"" Dec 13 02:39:29.447325 containerd[1450]: time="2024-12-13T02:39:29.447271182Z" level=info msg="CreateContainer within sandbox \"aa16f1b9914f704e0beb84e03a449d7131ca2487257c7356a383d314d0d83293\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:39:29.480608 containerd[1450]: time="2024-12-13T02:39:29.480254146Z" level=info msg="CreateContainer within sandbox \"7924d7d81fbaba9e4d1d58e284a5172810542e8bf20af5463c1579576388166a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"98b453fb1d3c70ce8002780f61be84e396d24361787dab4a4ded7b578ce994dc\"" Dec 13 02:39:29.482553 containerd[1450]: time="2024-12-13T02:39:29.481829523Z" level=info msg="StartContainer for \"98b453fb1d3c70ce8002780f61be84e396d24361787dab4a4ded7b578ce994dc\"" Dec 13 02:39:29.508924 containerd[1450]: time="2024-12-13T02:39:29.508885149Z" level=info msg="CreateContainer within sandbox \"aa16f1b9914f704e0beb84e03a449d7131ca2487257c7356a383d314d0d83293\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64b25ff1ac174bee6289a801f7e7c5b786f3f1eb3bf9ca891867a243eadf0b6c\"" Dec 13 02:39:29.512155 containerd[1450]: time="2024-12-13T02:39:29.511006698Z" level=info msg="StartContainer for \"64b25ff1ac174bee6289a801f7e7c5b786f3f1eb3bf9ca891867a243eadf0b6c\"" Dec 13 02:39:29.515699 systemd[1]: Started cri-containerd-98b453fb1d3c70ce8002780f61be84e396d24361787dab4a4ded7b578ce994dc.scope - libcontainer container 98b453fb1d3c70ce8002780f61be84e396d24361787dab4a4ded7b578ce994dc. 
Dec 13 02:39:29.517708 containerd[1450]: time="2024-12-13T02:39:29.517678798Z" level=info msg="CreateContainer within sandbox \"92fdc95ffaace0d45284819aa27605eef71fa98bdb0b9209cb5e0148f80e24c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b87e304e2e36c61bd10bee14cddbbcef4bf3daa73286a865a89d819c95b1b658\"" Dec 13 02:39:29.518198 containerd[1450]: time="2024-12-13T02:39:29.518164025Z" level=info msg="StartContainer for \"b87e304e2e36c61bd10bee14cddbbcef4bf3daa73286a865a89d819c95b1b658\"" Dec 13 02:39:29.557982 systemd[1]: Started cri-containerd-64b25ff1ac174bee6289a801f7e7c5b786f3f1eb3bf9ca891867a243eadf0b6c.scope - libcontainer container 64b25ff1ac174bee6289a801f7e7c5b786f3f1eb3bf9ca891867a243eadf0b6c. Dec 13 02:39:29.574772 systemd[1]: Started cri-containerd-b87e304e2e36c61bd10bee14cddbbcef4bf3daa73286a865a89d819c95b1b658.scope - libcontainer container b87e304e2e36c61bd10bee14cddbbcef4bf3daa73286a865a89d819c95b1b658. Dec 13 02:39:29.606154 containerd[1450]: time="2024-12-13T02:39:29.605005399Z" level=info msg="StartContainer for \"98b453fb1d3c70ce8002780f61be84e396d24361787dab4a4ded7b578ce994dc\" returns successfully" Dec 13 02:39:29.651413 containerd[1450]: time="2024-12-13T02:39:29.651366125Z" level=info msg="StartContainer for \"64b25ff1ac174bee6289a801f7e7c5b786f3f1eb3bf9ca891867a243eadf0b6c\" returns successfully" Dec 13 02:39:29.679051 containerd[1450]: time="2024-12-13T02:39:29.678901930Z" level=info msg="StartContainer for \"b87e304e2e36c61bd10bee14cddbbcef4bf3daa73286a865a89d819c95b1b658\" returns successfully" Dec 13 02:39:30.142051 kubelet[2296]: E1213 02:39:30.142012 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-31d3d6554f.novalocal?timeout=10s\": dial tcp 172.24.4.208:6443: connect: connection refused" interval="3.2s" Dec 13 02:39:30.246219 kubelet[2296]: I1213 02:39:30.245777 2296 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:30.246219 kubelet[2296]: E1213 02:39:30.246186 2296 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.208:6443/api/v1/nodes\": dial tcp 172.24.4.208:6443: connect: connection refused" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:30.590181 kubelet[2296]: W1213 02:39:30.590034 2296 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:30.590181 kubelet[2296]: E1213 02:39:30.590112 2296 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.208:6443: connect: connection refused Dec 13 02:39:32.628570 kubelet[2296]: E1213 02:39:32.628397 2296 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-1-b-31d3d6554f.novalocal.18109c2b7ba5e761 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-31d3d6554f.novalocal,UID:ci-4081-2-1-b-31d3d6554f.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-31d3d6554f.novalocal,},FirstTimestamp:2024-12-13 02:39:27.109637985 +0000 UTC m=+1.909349113,LastTimestamp:2024-12-13 02:39:27.109637985 +0000 UTC m=+1.909349113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-31d3d6554f.novalocal,}" Dec 13 02:39:32.897769 kubelet[2296]: E1213 02:39:32.896974 2296 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-2-1-b-31d3d6554f.novalocal" not found Dec 13 02:39:33.101146 kubelet[2296]: I1213 02:39:33.100970 2296 apiserver.go:52] "Watching apiserver" Dec 13 02:39:33.134327 kubelet[2296]: I1213 02:39:33.134240 2296 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:39:33.255216 kubelet[2296]: E1213 02:39:33.255010 2296 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-2-1-b-31d3d6554f.novalocal" not found Dec 13 02:39:33.350568 kubelet[2296]: E1213 02:39:33.350424 2296 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-b-31d3d6554f.novalocal\" not found" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:33.452062 kubelet[2296]: I1213 02:39:33.451950 2296 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:33.462812 kubelet[2296]: I1213 02:39:33.462314 2296 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:35.877588 systemd[1]: Reloading requested from client PID 2569 ('systemctl') (unit session-11.scope)... Dec 13 02:39:35.877632 systemd[1]: Reloading... Dec 13 02:39:35.967520 zram_generator::config[2604]: No configuration found. Dec 13 02:39:36.132980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:39:36.235117 systemd[1]: Reloading finished in 356 ms. Dec 13 02:39:36.278055 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:39:36.279602 kubelet[2296]: I1213 02:39:36.278186 2296 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:39:36.292906 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:39:36.293272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:39:36.293344 systemd[1]: kubelet.service: Consumed 1.821s CPU time, 111.4M memory peak, 0B memory swap peak. Dec 13 02:39:36.298754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:39:37.370811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
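
With the API server finally reachable, registration succeeds (kubelet_node_status.go:76 above), and after the configuration reload systemd restarts the kubelet. Confirming the registration from outside is a one-call client-go sketch; the kubeconfig path here is an assumption for illustration, any admin credential against https://172.24.4.208:6443 works the same way:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path assumed; on a node like this the admin config would typically
    	// live under /etc/kubernetes.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Println(n.Name) // expect ci-4081-2-1-b-31d3d6554f.novalocal
    	}
    }
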
Dec 13 02:39:37.381937 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:39:37.477691 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:39:37.478037 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:39:37.478125 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:39:37.478270 kubelet[2672]: I1213 02:39:37.478234 2672 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:39:37.486586 kubelet[2672]: I1213 02:39:37.485375 2672 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:39:37.486586 kubelet[2672]: I1213 02:39:37.485401 2672 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:39:37.486586 kubelet[2672]: I1213 02:39:37.485662 2672 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:39:37.492662 kubelet[2672]: I1213 02:39:37.492636 2672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:39:37.502025 kubelet[2672]: I1213 02:39:37.501991 2672 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:39:37.511130 kubelet[2672]: I1213 02:39:37.511104 2672 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:39:37.511554 kubelet[2672]: I1213 02:39:37.511539 2672 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:39:37.511833 kubelet[2672]: I1213 02:39:37.511815 2672 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:39:37.511987 kubelet[2672]: I1213 02:39:37.511975 2672 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:39:37.512062 kubelet[2672]: I1213 02:39:37.512052 2672 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:39:37.512145 kubelet[2672]: I1213 02:39:37.512135 2672 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:39:37.512295 kubelet[2672]: I1213 02:39:37.512284 2672 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:39:37.512379 kubelet[2672]: I1213 02:39:37.512369 2672 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:39:37.512456 kubelet[2672]: I1213 02:39:37.512447 2672 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:39:37.512547 kubelet[2672]: I1213 02:39:37.512537 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:39:37.515456 kubelet[2672]: I1213 02:39:37.515406 2672 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:39:37.515828 kubelet[2672]: I1213 02:39:37.515805 2672 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:39:37.516263 kubelet[2672]: I1213 02:39:37.516239 2672 server.go:1256] "Started kubelet" Dec 13 02:39:37.526610 kubelet[2672]: I1213 02:39:37.526575 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:39:37.540805 kubelet[2672]: I1213 02:39:37.540768 2672 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:39:37.544829 kubelet[2672]: I1213 02:39:37.544797 2672 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:39:37.547584 kubelet[2672]: I1213 02:39:37.547554 2672 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 13 02:39:37.547840 kubelet[2672]: I1213 02:39:37.547813 2672 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:39:37.553019 kubelet[2672]: I1213 02:39:37.552987 2672 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:39:37.554015 kubelet[2672]: I1213 02:39:37.553474 2672 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:39:37.554015 kubelet[2672]: I1213 02:39:37.553638 2672 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:39:37.556407 kubelet[2672]: I1213 02:39:37.556330 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:39:37.558198 kubelet[2672]: I1213 02:39:37.558182 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:39:37.558595 kubelet[2672]: I1213 02:39:37.558281 2672 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:39:37.558595 kubelet[2672]: I1213 02:39:37.558307 2672 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:39:37.558595 kubelet[2672]: E1213 02:39:37.558356 2672 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:39:37.569221 kubelet[2672]: I1213 02:39:37.568066 2672 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:39:37.569221 kubelet[2672]: I1213 02:39:37.568212 2672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:39:37.571882 kubelet[2672]: E1213 02:39:37.571862 2672 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:39:37.574512 kubelet[2672]: I1213 02:39:37.574217 2672 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:39:37.625522 kubelet[2672]: I1213 02:39:37.625182 2672 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:39:37.625522 kubelet[2672]: I1213 02:39:37.625219 2672 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:39:37.625522 kubelet[2672]: I1213 02:39:37.625236 2672 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:39:37.625522 kubelet[2672]: I1213 02:39:37.625390 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:39:37.625522 kubelet[2672]: I1213 02:39:37.625414 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:39:37.625522 kubelet[2672]: I1213 02:39:37.625420 2672 policy_none.go:49] "None policy: Start" Dec 13 02:39:37.628113 kubelet[2672]: I1213 02:39:37.627251 2672 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:39:37.628113 kubelet[2672]: I1213 02:39:37.627332 2672 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:39:37.628113 kubelet[2672]: I1213 02:39:37.627531 2672 state_mem.go:75] "Updated machine memory state" Dec 13 02:39:37.633275 kubelet[2672]: I1213 02:39:37.633228 2672 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:39:37.634332 kubelet[2672]: I1213 02:39:37.634289 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:39:37.661079 kubelet[2672]: I1213 02:39:37.660627 2672 topology_manager.go:215] "Topology Admit Handler" podUID="79778268541385d03c0f2dd18dd10a55" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.661079 kubelet[2672]: I1213 02:39:37.660713 2672 topology_manager.go:215] "Topology Admit Handler" podUID="8c3812177f6981989f1a86e0ec7463df" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.661079 kubelet[2672]: I1213 02:39:37.660753 2672 topology_manager.go:215] "Topology Admit Handler" podUID="d0d656891e32b8bcac1f06c0a945818c" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.664271 kubelet[2672]: I1213 02:39:37.664236 2672 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.674717 kubelet[2672]: W1213 02:39:37.674677 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:39:37.675392 kubelet[2672]: W1213 02:39:37.675290 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:39:37.681163 kubelet[2672]: W1213 02:39:37.680952 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:39:37.685326 kubelet[2672]: I1213 02:39:37.685300 2672 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.685587 kubelet[2672]: I1213 02:39:37.685529 2672 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 
02:39:37.754466 kubelet[2672]: I1213 02:39:37.754427 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79778268541385d03c0f2dd18dd10a55-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"79778268541385d03c0f2dd18dd10a55\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.754798 kubelet[2672]: I1213 02:39:37.754505 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79778268541385d03c0f2dd18dd10a55-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"79778268541385d03c0f2dd18dd10a55\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.754798 kubelet[2672]: I1213 02:39:37.754572 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.754798 kubelet[2672]: I1213 02:39:37.754623 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.754798 kubelet[2672]: I1213 02:39:37.754651 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79778268541385d03c0f2dd18dd10a55-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"79778268541385d03c0f2dd18dd10a55\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.755025 kubelet[2672]: I1213 02:39:37.754677 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.755025 kubelet[2672]: I1213 02:39:37.754701 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.755025 kubelet[2672]: I1213 02:39:37.754741 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3812177f6981989f1a86e0ec7463df-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"8c3812177f6981989f1a86e0ec7463df\") " 
pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:37.755025 kubelet[2672]: I1213 02:39:37.754767 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0d656891e32b8bcac1f06c0a945818c-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal\" (UID: \"d0d656891e32b8bcac1f06c0a945818c\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:38.521169 kubelet[2672]: I1213 02:39:38.521081 2672 apiserver.go:52] "Watching apiserver" Dec 13 02:39:38.554329 kubelet[2672]: I1213 02:39:38.554245 2672 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:39:38.618187 kubelet[2672]: W1213 02:39:38.617899 2672 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:39:38.618187 kubelet[2672]: E1213 02:39:38.618039 2672 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:39:38.687109 kubelet[2672]: I1213 02:39:38.687054 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-b-31d3d6554f.novalocal" podStartSLOduration=1.685462896 podStartE2EDuration="1.685462896s" podCreationTimestamp="2024-12-13 02:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:39:38.66483188 +0000 UTC m=+1.268211890" watchObservedRunningTime="2024-12-13 02:39:38.685462896 +0000 UTC m=+1.288842906" Dec 13 02:39:38.748342 kubelet[2672]: I1213 02:39:38.748154 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-b-31d3d6554f.novalocal" podStartSLOduration=1.748110665 podStartE2EDuration="1.748110665s" podCreationTimestamp="2024-12-13 02:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:39:38.687195721 +0000 UTC m=+1.290575741" watchObservedRunningTime="2024-12-13 02:39:38.748110665 +0000 UTC m=+1.351490685" Dec 13 02:39:43.286830 kubelet[2672]: I1213 02:39:43.286741 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-b-31d3d6554f.novalocal" podStartSLOduration=6.286593889 podStartE2EDuration="6.286593889s" podCreationTimestamp="2024-12-13 02:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:39:38.749322585 +0000 UTC m=+1.352702605" watchObservedRunningTime="2024-12-13 02:39:43.286593889 +0000 UTC m=+5.889973950" Dec 13 02:39:43.377712 sudo[1710]: pam_unix(sudo:session): session closed for user root Dec 13 02:39:43.560195 sshd[1707]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:43.569660 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:39:43.571007 systemd[1]: sshd@8-172.24.4.208:22-172.24.4.1:39626.service: Deactivated successfully. Dec 13 02:39:43.577619 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 13 02:39:43.578137 systemd[1]: session-11.scope: Consumed 7.902s CPU time, 186.8M memory peak, 0B memory swap peak. Dec 13 02:39:43.582932 systemd-logind[1432]: Removed session 11. Dec 13 02:39:48.225344 kubelet[2672]: I1213 02:39:48.225272 2672 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:39:48.226891 containerd[1450]: time="2024-12-13T02:39:48.226216637Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:39:48.227430 kubelet[2672]: I1213 02:39:48.226511 2672 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:39:49.171362 kubelet[2672]: I1213 02:39:49.171314 2672 topology_manager.go:215] "Topology Admit Handler" podUID="0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a" podNamespace="kube-system" podName="kube-proxy-zlsnz" Dec 13 02:39:49.188406 systemd[1]: Created slice kubepods-besteffort-pod0fe1dbd3_0e8c_4e78_a2cf_d95064566c2a.slice - libcontainer container kubepods-besteffort-pod0fe1dbd3_0e8c_4e78_a2cf_d95064566c2a.slice. Dec 13 02:39:49.232513 kubelet[2672]: I1213 02:39:49.232384 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a-kube-proxy\") pod \"kube-proxy-zlsnz\" (UID: \"0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a\") " pod="kube-system/kube-proxy-zlsnz" Dec 13 02:39:49.232513 kubelet[2672]: I1213 02:39:49.232469 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a-xtables-lock\") pod \"kube-proxy-zlsnz\" (UID: \"0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a\") " pod="kube-system/kube-proxy-zlsnz" Dec 13 02:39:49.232963 kubelet[2672]: I1213 02:39:49.232554 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a-lib-modules\") pod \"kube-proxy-zlsnz\" (UID: \"0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a\") " pod="kube-system/kube-proxy-zlsnz" Dec 13 02:39:49.333890 kubelet[2672]: I1213 02:39:49.333324 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdrs\" (UniqueName: \"kubernetes.io/projected/0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a-kube-api-access-bwdrs\") pod \"kube-proxy-zlsnz\" (UID: \"0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a\") " pod="kube-system/kube-proxy-zlsnz" Dec 13 02:39:49.371847 kubelet[2672]: I1213 02:39:49.371770 2672 topology_manager.go:215] "Topology Admit Handler" podUID="db3500ea-1601-4437-8e34-321659c75183" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-q2dgv" Dec 13 02:39:49.382089 systemd[1]: Created slice kubepods-besteffort-poddb3500ea_1601_4437_8e34_321659c75183.slice - libcontainer container kubepods-besteffort-poddb3500ea_1601_4437_8e34_321659c75183.slice. 
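
Note the slice names just above: the pod UID 0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a appears with every '-' swapped for '_', because '-' separates hierarchy levels in systemd slice names. The convention as a plain string transform (an illustration of the naming rule, not the kubelet's implementation):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName builds the systemd slice name for a pod cgroup: the QoS
    // class slice is the parent, and dashes in the UID become underscores.
    func podSliceName(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(podSliceName("besteffort", "0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a"))
    	// kubepods-besteffort-pod0fe1dbd3_0e8c_4e78_a2cf_d95064566c2a.slice
    }
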
Dec 13 02:39:49.498131 containerd[1450]: time="2024-12-13T02:39:49.497973205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlsnz,Uid:0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a,Namespace:kube-system,Attempt:0,}"
Dec 13 02:39:49.539812 kubelet[2672]: I1213 02:39:49.539748 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvq8z\" (UniqueName: \"kubernetes.io/projected/db3500ea-1601-4437-8e34-321659c75183-kube-api-access-hvq8z\") pod \"tigera-operator-c7ccbd65-q2dgv\" (UID: \"db3500ea-1601-4437-8e34-321659c75183\") " pod="tigera-operator/tigera-operator-c7ccbd65-q2dgv"
Dec 13 02:39:49.539987 kubelet[2672]: I1213 02:39:49.539893 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db3500ea-1601-4437-8e34-321659c75183-var-lib-calico\") pod \"tigera-operator-c7ccbd65-q2dgv\" (UID: \"db3500ea-1601-4437-8e34-321659c75183\") " pod="tigera-operator/tigera-operator-c7ccbd65-q2dgv"
Dec 13 02:39:49.568271 containerd[1450]: time="2024-12-13T02:39:49.568051930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:39:49.568271 containerd[1450]: time="2024-12-13T02:39:49.568121030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:39:49.568271 containerd[1450]: time="2024-12-13T02:39:49.568141288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:39:49.570035 containerd[1450]: time="2024-12-13T02:39:49.569884464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:39:49.600653 systemd[1]: Started cri-containerd-1ade4f45768e1d480f61038682b19eb3da2702844f54dd1ebc692a70d65b5e34.scope - libcontainer container 1ade4f45768e1d480f61038682b19eb3da2702844f54dd1ebc692a70d65b5e34.
Dec 13 02:39:49.630278 containerd[1450]: time="2024-12-13T02:39:49.630223648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlsnz,Uid:0fe1dbd3-0e8c-4e78-a2cf-d95064566c2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ade4f45768e1d480f61038682b19eb3da2702844f54dd1ebc692a70d65b5e34\""
Dec 13 02:39:49.633790 containerd[1450]: time="2024-12-13T02:39:49.633620984Z" level=info msg="CreateContainer within sandbox \"1ade4f45768e1d480f61038682b19eb3da2702844f54dd1ebc692a70d65b5e34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 02:39:49.669521 containerd[1450]: time="2024-12-13T02:39:49.669447788Z" level=info msg="CreateContainer within sandbox \"1ade4f45768e1d480f61038682b19eb3da2702844f54dd1ebc692a70d65b5e34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"044992441778da892f5968e630fefcabe4bfb5afafe26ac64ffcdff8f66e4e43\""
Dec 13 02:39:49.670504 containerd[1450]: time="2024-12-13T02:39:49.670323078Z" level=info msg="StartContainer for \"044992441778da892f5968e630fefcabe4bfb5afafe26ac64ffcdff8f66e4e43\""
Dec 13 02:39:49.687059 containerd[1450]: time="2024-12-13T02:39:49.686943016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-q2dgv,Uid:db3500ea-1601-4437-8e34-321659c75183,Namespace:tigera-operator,Attempt:0,}"
Dec 13 02:39:49.700682 systemd[1]: Started cri-containerd-044992441778da892f5968e630fefcabe4bfb5afafe26ac64ffcdff8f66e4e43.scope - libcontainer container 044992441778da892f5968e630fefcabe4bfb5afafe26ac64ffcdff8f66e4e43.
Dec 13 02:39:49.733692 containerd[1450]: time="2024-12-13T02:39:49.733158219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:39:49.734213 containerd[1450]: time="2024-12-13T02:39:49.733983756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:39:49.734213 containerd[1450]: time="2024-12-13T02:39:49.734044410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:39:49.734213 containerd[1450]: time="2024-12-13T02:39:49.734161499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:39:49.752936 containerd[1450]: time="2024-12-13T02:39:49.752800641Z" level=info msg="StartContainer for \"044992441778da892f5968e630fefcabe4bfb5afafe26ac64ffcdff8f66e4e43\" returns successfully"
Dec 13 02:39:49.759514 systemd[1]: Started cri-containerd-b851f89c5891cbf50d1f0e0bf6983633dd2f14fab968bb864589ba5436817c20.scope - libcontainer container b851f89c5891cbf50d1f0e0bf6983633dd2f14fab968bb864589ba5436817c20.
Dec 13 02:39:49.807600 containerd[1450]: time="2024-12-13T02:39:49.807542063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-q2dgv,Uid:db3500ea-1601-4437-8e34-321659c75183,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b851f89c5891cbf50d1f0e0bf6983633dd2f14fab968bb864589ba5436817c20\""
Dec 13 02:39:49.815435 containerd[1450]: time="2024-12-13T02:39:49.814327670Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 02:39:52.460464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171403371.mount: Deactivated successfully.
Dec 13 02:39:57.746396 containerd[1450]: time="2024-12-13T02:39:57.746171202Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:57.781886 containerd[1450]: time="2024-12-13T02:39:57.781676562Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764333"
Dec 13 02:39:57.809988 containerd[1450]: time="2024-12-13T02:39:57.809600941Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:57.818047 containerd[1450]: time="2024-12-13T02:39:57.816811198Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:39:57.820683 containerd[1450]: time="2024-12-13T02:39:57.820334623Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 8.004909707s"
Dec 13 02:39:57.820683 containerd[1450]: time="2024-12-13T02:39:57.820427808Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Dec 13 02:39:57.884704 kubelet[2672]: I1213 02:39:57.884468 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zlsnz" podStartSLOduration=8.879257796 podStartE2EDuration="8.879257796s" podCreationTimestamp="2024-12-13 02:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:39:50.694327893 +0000 UTC m=+13.297707953" watchObservedRunningTime="2024-12-13 02:39:57.879257796 +0000 UTC m=+20.482637856"
Dec 13 02:39:57.915386 containerd[1450]: time="2024-12-13T02:39:57.915326242Z" level=info msg="CreateContainer within sandbox \"b851f89c5891cbf50d1f0e0bf6983633dd2f14fab968bb864589ba5436817c20\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 02:39:57.949917 containerd[1450]: time="2024-12-13T02:39:57.949854210Z" level=info msg="CreateContainer within sandbox \"b851f89c5891cbf50d1f0e0bf6983633dd2f14fab968bb864589ba5436817c20\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b3dfc3559ab4215d13549d3e3fee5e1c95743c87f52b5c665f253e85d1c176c5\""
Dec 13 02:39:57.950451 containerd[1450]: time="2024-12-13T02:39:57.950391037Z" level=info msg="StartContainer for \"b3dfc3559ab4215d13549d3e3fee5e1c95743c87f52b5c665f253e85d1c176c5\""
Dec 13 02:39:57.992747 systemd[1]: Started cri-containerd-b3dfc3559ab4215d13549d3e3fee5e1c95743c87f52b5c665f253e85d1c176c5.scope - libcontainer container b3dfc3559ab4215d13549d3e3fee5e1c95743c87f52b5c665f253e85d1c176c5.
Dec 13 02:39:58.068743 containerd[1450]: time="2024-12-13T02:39:58.068577291Z" level=info msg="StartContainer for \"b3dfc3559ab4215d13549d3e3fee5e1c95743c87f52b5c665f253e85d1c176c5\" returns successfully"
Dec 13 02:40:02.011234 kubelet[2672]: I1213 02:40:02.011182 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-q2dgv" podStartSLOduration=4.988583558 podStartE2EDuration="13.01112187s" podCreationTimestamp="2024-12-13 02:39:49 +0000 UTC" firstStartedPulling="2024-12-13 02:39:49.808731892 +0000 UTC m=+12.412111912" lastFinishedPulling="2024-12-13 02:39:57.831270164 +0000 UTC m=+20.434650224" observedRunningTime="2024-12-13 02:39:58.76533954 +0000 UTC m=+21.368719670" watchObservedRunningTime="2024-12-13 02:40:02.01112187 +0000 UTC m=+24.614501880"
Dec 13 02:40:02.013002 kubelet[2672]: I1213 02:40:02.011546 2672 topology_manager.go:215] "Topology Admit Handler" podUID="6b0bad46-7056-4564-8d0b-45ea5ddf98fe" podNamespace="calico-system" podName="calico-typha-5bc987f856-nzl97"
Dec 13 02:40:02.023941 systemd[1]: Created slice kubepods-besteffort-pod6b0bad46_7056_4564_8d0b_45ea5ddf98fe.slice - libcontainer container kubepods-besteffort-pod6b0bad46_7056_4564_8d0b_45ea5ddf98fe.slice.
Dec 13 02:40:02.033732 kubelet[2672]: W1213 02:40:02.033687 2672 reflector.go:539] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-2-1-b-31d3d6554f.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-2-1-b-31d3d6554f.novalocal' and this object
Dec 13 02:40:02.033732 kubelet[2672]: E1213 02:40:02.033733 2672 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-2-1-b-31d3d6554f.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-2-1-b-31d3d6554f.novalocal' and this object
Dec 13 02:40:02.246423 kubelet[2672]: I1213 02:40:02.246257 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b0bad46-7056-4564-8d0b-45ea5ddf98fe-tigera-ca-bundle\") pod \"calico-typha-5bc987f856-nzl97\" (UID: \"6b0bad46-7056-4564-8d0b-45ea5ddf98fe\") " pod="calico-system/calico-typha-5bc987f856-nzl97"
Dec 13 02:40:02.246423 kubelet[2672]: I1213 02:40:02.246401 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96fq\" (UniqueName: \"kubernetes.io/projected/6b0bad46-7056-4564-8d0b-45ea5ddf98fe-kube-api-access-q96fq\") pod \"calico-typha-5bc987f856-nzl97\" (UID: \"6b0bad46-7056-4564-8d0b-45ea5ddf98fe\") " pod="calico-system/calico-typha-5bc987f856-nzl97"
Dec 13 02:40:02.246741 kubelet[2672]: I1213 02:40:02.246516 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6b0bad46-7056-4564-8d0b-45ea5ddf98fe-typha-certs\") pod \"calico-typha-5bc987f856-nzl97\" (UID: \"6b0bad46-7056-4564-8d0b-45ea5ddf98fe\") " pod="calico-system/calico-typha-5bc987f856-nzl97"
Dec 13 02:40:02.755669 kubelet[2672]: I1213 02:40:02.755119 2672 topology_manager.go:215] "Topology Admit Handler" podUID="858cf28b-1c33-4aa3-a26d-c7bfec012731" podNamespace="calico-system" podName="calico-node-5tmsf"
Dec 13 02:40:02.772783 systemd[1]: Created slice kubepods-besteffort-pod858cf28b_1c33_4aa3_a26d_c7bfec012731.slice - libcontainer container kubepods-besteffort-pod858cf28b_1c33_4aa3_a26d_c7bfec012731.slice.
Dec 13 02:40:02.850541 kubelet[2672]: I1213 02:40:02.850350 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-xtables-lock\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.850541 kubelet[2672]: I1213 02:40:02.850400 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-policysync\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.850541 kubelet[2672]: I1213 02:40:02.850428 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/858cf28b-1c33-4aa3-a26d-c7bfec012731-node-certs\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.850541 kubelet[2672]: I1213 02:40:02.850457 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-var-lib-calico\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.850966 kubelet[2672]: I1213 02:40:02.850584 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/858cf28b-1c33-4aa3-a26d-c7bfec012731-tigera-ca-bundle\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.850966 kubelet[2672]: I1213 02:40:02.850654 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-cni-bin-dir\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.850966 kubelet[2672]: I1213 02:40:02.850721 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-flexvol-driver-host\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.851646 kubelet[2672]: I1213 02:40:02.851606 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-var-run-calico\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.851770 kubelet[2672]: I1213 02:40:02.851708 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-lib-modules\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.852071 kubelet[2672]: I1213 02:40:02.851781 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-cni-log-dir\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.852071 kubelet[2672]: I1213 02:40:02.851918 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/858cf28b-1c33-4aa3-a26d-c7bfec012731-cni-net-dir\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.852071 kubelet[2672]: I1213 02:40:02.852010 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g49pk\" (UniqueName: \"kubernetes.io/projected/858cf28b-1c33-4aa3-a26d-c7bfec012731-kube-api-access-g49pk\") pod \"calico-node-5tmsf\" (UID: \"858cf28b-1c33-4aa3-a26d-c7bfec012731\") " pod="calico-system/calico-node-5tmsf"
Dec 13 02:40:02.906257 kubelet[2672]: I1213 02:40:02.906194 2672 topology_manager.go:215] "Topology Admit Handler" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" podNamespace="calico-system" podName="csi-node-driver-9qnbf"
Dec 13 02:40:02.907773 kubelet[2672]: E1213 02:40:02.907564 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b"
Dec 13 02:40:02.958839 kubelet[2672]: E1213 02:40:02.958747 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 02:40:02.959188 kubelet[2672]: W1213 02:40:02.958781 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 02:40:02.959188 kubelet[2672]: E1213 02:40:02.959059 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Dec 13 02:40:02.960067 kubelet[2672]: E1213 02:40:02.959919 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.960067 kubelet[2672]: W1213 02:40:02.959932 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.960575 kubelet[2672]: E1213 02:40:02.960424 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.960575 kubelet[2672]: W1213 02:40:02.960435 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.960575 kubelet[2672]: E1213 02:40:02.960451 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.960764 kubelet[2672]: E1213 02:40:02.960752 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.960930 kubelet[2672]: W1213 02:40:02.960839 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.960930 kubelet[2672]: E1213 02:40:02.960859 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.961844 kubelet[2672]: E1213 02:40:02.961699 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.961844 kubelet[2672]: W1213 02:40:02.961711 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.961844 kubelet[2672]: E1213 02:40:02.961725 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.962045 kubelet[2672]: E1213 02:40:02.962032 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.983387 kubelet[2672]: E1213 02:40:02.983339 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.985646 kubelet[2672]: W1213 02:40:02.985624 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.985808 kubelet[2672]: E1213 02:40:02.985793 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:02.987264 kubelet[2672]: E1213 02:40:02.987215 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.987264 kubelet[2672]: W1213 02:40:02.987246 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.987783 kubelet[2672]: E1213 02:40:02.987289 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.987952 kubelet[2672]: E1213 02:40:02.987874 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.989086 kubelet[2672]: W1213 02:40:02.988010 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.989086 kubelet[2672]: E1213 02:40:02.988537 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.989086 kubelet[2672]: W1213 02:40:02.988549 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.989202 kubelet[2672]: E1213 02:40:02.989168 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.989202 kubelet[2672]: W1213 02:40:02.989180 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.989784 kubelet[2672]: E1213 02:40:02.989296 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.989784 kubelet[2672]: E1213 02:40:02.989326 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.989784 kubelet[2672]: E1213 02:40:02.989307 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.995615 kubelet[2672]: E1213 02:40:02.995579 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.995615 kubelet[2672]: W1213 02:40:02.995605 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.995824 kubelet[2672]: E1213 02:40:02.995633 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:02.996105 kubelet[2672]: E1213 02:40:02.996085 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.996105 kubelet[2672]: W1213 02:40:02.996101 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.996229 kubelet[2672]: E1213 02:40:02.996115 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.996425 kubelet[2672]: E1213 02:40:02.996405 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.996425 kubelet[2672]: W1213 02:40:02.996420 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.996557 kubelet[2672]: E1213 02:40:02.996434 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.997620 kubelet[2672]: E1213 02:40:02.997598 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.997620 kubelet[2672]: W1213 02:40:02.997615 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.997737 kubelet[2672]: E1213 02:40:02.997632 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.997821 kubelet[2672]: E1213 02:40:02.997803 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.997821 kubelet[2672]: W1213 02:40:02.997817 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.998018 kubelet[2672]: E1213 02:40:02.997830 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.998018 kubelet[2672]: E1213 02:40:02.997989 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.998018 kubelet[2672]: W1213 02:40:02.997998 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.998018 kubelet[2672]: E1213 02:40:02.998010 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:02.998302 kubelet[2672]: E1213 02:40:02.998152 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.998302 kubelet[2672]: W1213 02:40:02.998162 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.998302 kubelet[2672]: E1213 02:40:02.998174 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.999084 kubelet[2672]: E1213 02:40:02.998343 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.999084 kubelet[2672]: W1213 02:40:02.998353 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.999084 kubelet[2672]: E1213 02:40:02.998366 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.999084 kubelet[2672]: E1213 02:40:02.998752 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.999084 kubelet[2672]: W1213 02:40:02.998762 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.999084 kubelet[2672]: E1213 02:40:02.998776 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.999345 kubelet[2672]: E1213 02:40:02.999152 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:02.999345 kubelet[2672]: W1213 02:40:02.999164 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:02.999345 kubelet[2672]: E1213 02:40:02.999178 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:02.999605 kubelet[2672]: E1213 02:40:02.999584 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.000269 kubelet[2672]: W1213 02:40:03.000232 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.000269 kubelet[2672]: E1213 02:40:03.000262 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.000465 kubelet[2672]: E1213 02:40:03.000446 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.000465 kubelet[2672]: W1213 02:40:03.000460 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.000555 kubelet[2672]: E1213 02:40:03.000473 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.000693 kubelet[2672]: E1213 02:40:03.000674 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.000693 kubelet[2672]: W1213 02:40:03.000688 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.000794 kubelet[2672]: E1213 02:40:03.000700 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.001632 kubelet[2672]: E1213 02:40:03.001609 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.001632 kubelet[2672]: W1213 02:40:03.001629 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.001765 kubelet[2672]: E1213 02:40:03.001646 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.001852 kubelet[2672]: E1213 02:40:03.001830 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.001852 kubelet[2672]: W1213 02:40:03.001849 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.001961 kubelet[2672]: E1213 02:40:03.001864 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.002092 kubelet[2672]: E1213 02:40:03.002072 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.002092 kubelet[2672]: W1213 02:40:03.002088 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.002212 kubelet[2672]: E1213 02:40:03.002103 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.003130 kubelet[2672]: E1213 02:40:03.003107 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.003130 kubelet[2672]: W1213 02:40:03.003125 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.003230 kubelet[2672]: E1213 02:40:03.003142 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.003339 kubelet[2672]: E1213 02:40:03.003314 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.003339 kubelet[2672]: W1213 02:40:03.003332 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.003411 kubelet[2672]: E1213 02:40:03.003346 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.003624 kubelet[2672]: E1213 02:40:03.003593 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.003624 kubelet[2672]: W1213 02:40:03.003613 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.003997 kubelet[2672]: E1213 02:40:03.003635 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.003997 kubelet[2672]: E1213 02:40:03.003995 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.004067 kubelet[2672]: W1213 02:40:03.004006 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.004067 kubelet[2672]: E1213 02:40:03.004023 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.054741 kubelet[2672]: E1213 02:40:03.054585 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.054741 kubelet[2672]: W1213 02:40:03.054620 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.054741 kubelet[2672]: E1213 02:40:03.054675 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.054741 kubelet[2672]: I1213 02:40:03.054741 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e72692f-d22b-4813-bb35-ab03aefb087b-varrun\") pod \"csi-node-driver-9qnbf\" (UID: \"8e72692f-d22b-4813-bb35-ab03aefb087b\") " pod="calico-system/csi-node-driver-9qnbf" Dec 13 02:40:03.057664 kubelet[2672]: E1213 02:40:03.056586 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.057664 kubelet[2672]: W1213 02:40:03.056604 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.057664 kubelet[2672]: E1213 02:40:03.056680 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.057664 kubelet[2672]: E1213 02:40:03.056921 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.057664 kubelet[2672]: W1213 02:40:03.056933 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.057664 kubelet[2672]: E1213 02:40:03.056950 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.057664 kubelet[2672]: I1213 02:40:03.057040 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e72692f-d22b-4813-bb35-ab03aefb087b-socket-dir\") pod \"csi-node-driver-9qnbf\" (UID: \"8e72692f-d22b-4813-bb35-ab03aefb087b\") " pod="calico-system/csi-node-driver-9qnbf" Dec 13 02:40:03.058235 kubelet[2672]: E1213 02:40:03.058207 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.058235 kubelet[2672]: W1213 02:40:03.058228 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.058302 kubelet[2672]: E1213 02:40:03.058243 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.058677 kubelet[2672]: E1213 02:40:03.058638 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.058677 kubelet[2672]: W1213 02:40:03.058670 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.058758 kubelet[2672]: E1213 02:40:03.058697 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.059147 kubelet[2672]: E1213 02:40:03.059117 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.059147 kubelet[2672]: W1213 02:40:03.059133 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.059245 kubelet[2672]: E1213 02:40:03.059153 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.059622 kubelet[2672]: E1213 02:40:03.059596 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.059622 kubelet[2672]: W1213 02:40:03.059612 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.059622 kubelet[2672]: E1213 02:40:03.059624 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.059749 kubelet[2672]: I1213 02:40:03.059659 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmh6\" (UniqueName: \"kubernetes.io/projected/8e72692f-d22b-4813-bb35-ab03aefb087b-kube-api-access-hsmh6\") pod \"csi-node-driver-9qnbf\" (UID: \"8e72692f-d22b-4813-bb35-ab03aefb087b\") " pod="calico-system/csi-node-driver-9qnbf" Dec 13 02:40:03.059938 kubelet[2672]: E1213 02:40:03.059906 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.059938 kubelet[2672]: W1213 02:40:03.059923 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.059938 kubelet[2672]: E1213 02:40:03.059939 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.060205 kubelet[2672]: E1213 02:40:03.060177 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.060205 kubelet[2672]: W1213 02:40:03.060193 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.060285 kubelet[2672]: E1213 02:40:03.060210 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.060285 kubelet[2672]: I1213 02:40:03.060230 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e72692f-d22b-4813-bb35-ab03aefb087b-kubelet-dir\") pod \"csi-node-driver-9qnbf\" (UID: \"8e72692f-d22b-4813-bb35-ab03aefb087b\") " pod="calico-system/csi-node-driver-9qnbf" Dec 13 02:40:03.061115 kubelet[2672]: E1213 02:40:03.061094 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.061115 kubelet[2672]: W1213 02:40:03.061112 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.061224 kubelet[2672]: E1213 02:40:03.061206 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.061256 kubelet[2672]: I1213 02:40:03.061237 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e72692f-d22b-4813-bb35-ab03aefb087b-registration-dir\") pod \"csi-node-driver-9qnbf\" (UID: \"8e72692f-d22b-4813-bb35-ab03aefb087b\") " pod="calico-system/csi-node-driver-9qnbf" Dec 13 02:40:03.061792 kubelet[2672]: E1213 02:40:03.061773 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.061792 kubelet[2672]: W1213 02:40:03.061789 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.061916 kubelet[2672]: E1213 02:40:03.061897 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.063192 kubelet[2672]: E1213 02:40:03.063165 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.063192 kubelet[2672]: W1213 02:40:03.063182 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.063290 kubelet[2672]: E1213 02:40:03.063219 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.063726 kubelet[2672]: E1213 02:40:03.063672 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.063726 kubelet[2672]: W1213 02:40:03.063682 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.063726 kubelet[2672]: E1213 02:40:03.063708 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.064138 kubelet[2672]: E1213 02:40:03.064105 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.064138 kubelet[2672]: W1213 02:40:03.064120 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.064138 kubelet[2672]: E1213 02:40:03.064137 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.064778 kubelet[2672]: E1213 02:40:03.064748 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.064778 kubelet[2672]: W1213 02:40:03.064763 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.064778 kubelet[2672]: E1213 02:40:03.064775 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.065108 kubelet[2672]: E1213 02:40:03.065089 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.065283 kubelet[2672]: W1213 02:40:03.065108 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.065283 kubelet[2672]: E1213 02:40:03.065127 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.065579 kubelet[2672]: E1213 02:40:03.065544 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.065579 kubelet[2672]: W1213 02:40:03.065558 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.065579 kubelet[2672]: E1213 02:40:03.065570 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.106508 kubelet[2672]: E1213 02:40:03.106367 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.106508 kubelet[2672]: W1213 02:40:03.106388 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.106508 kubelet[2672]: E1213 02:40:03.106422 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.106993 kubelet[2672]: E1213 02:40:03.106927 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.106993 kubelet[2672]: W1213 02:40:03.106939 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.106993 kubelet[2672]: E1213 02:40:03.106954 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.167449 kubelet[2672]: E1213 02:40:03.167321 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.167449 kubelet[2672]: W1213 02:40:03.167351 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.167449 kubelet[2672]: E1213 02:40:03.167378 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.168634 kubelet[2672]: E1213 02:40:03.168466 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.168634 kubelet[2672]: W1213 02:40:03.168503 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.168634 kubelet[2672]: E1213 02:40:03.168525 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.168835 kubelet[2672]: E1213 02:40:03.168747 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.168835 kubelet[2672]: W1213 02:40:03.168756 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.169228 kubelet[2672]: E1213 02:40:03.168968 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.169674 kubelet[2672]: E1213 02:40:03.169618 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.169674 kubelet[2672]: W1213 02:40:03.169629 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.169748 kubelet[2672]: E1213 02:40:03.169680 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:40:03.181319 kubelet[2672]: E1213 02:40:03.181308 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.181504 kubelet[2672]: W1213 02:40:03.181394 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.181504 kubelet[2672]: E1213 02:40:03.181411 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.199052 kubelet[2672]: E1213 02:40:03.197436 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:40:03.199052 kubelet[2672]: W1213 02:40:03.197458 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:40:03.199052 kubelet[2672]: E1213 02:40:03.197659 2672 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:40:03.287193 containerd[1450]: time="2024-12-13T02:40:03.234935216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc987f856-nzl97,Uid:6b0bad46-7056-4564-8d0b-45ea5ddf98fe,Namespace:calico-system,Attempt:0,}" Dec 13 02:40:03.382273 containerd[1450]: time="2024-12-13T02:40:03.381857299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5tmsf,Uid:858cf28b-1c33-4aa3-a26d-c7bfec012731,Namespace:calico-system,Attempt:0,}" Dec 13 02:40:03.391029 containerd[1450]: time="2024-12-13T02:40:03.390900136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:03.391316 containerd[1450]: time="2024-12-13T02:40:03.391278227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:03.391421 containerd[1450]: time="2024-12-13T02:40:03.391396762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:03.391711 containerd[1450]: time="2024-12-13T02:40:03.391631219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:03.444804 containerd[1450]: time="2024-12-13T02:40:03.444618073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:03.444804 containerd[1450]: time="2024-12-13T02:40:03.444702133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:03.444804 containerd[1450]: time="2024-12-13T02:40:03.444716781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:03.445405 containerd[1450]: time="2024-12-13T02:40:03.445247582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:03.450724 systemd[1]: Started cri-containerd-d2ab06205771ae3de53f51e5e24b0b13c1bd3ee460aeb1bf739aec468b080aaf.scope - libcontainer container d2ab06205771ae3de53f51e5e24b0b13c1bd3ee460aeb1bf739aec468b080aaf. Dec 13 02:40:03.481739 systemd[1]: Started cri-containerd-adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e.scope - libcontainer container adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e. Dec 13 02:40:03.528815 containerd[1450]: time="2024-12-13T02:40:03.528265911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5tmsf,Uid:858cf28b-1c33-4aa3-a26d-c7bfec012731,Namespace:calico-system,Attempt:0,} returns sandbox id \"adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e\"" Dec 13 02:40:03.533850 containerd[1450]: time="2024-12-13T02:40:03.533792817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:40:03.534842 containerd[1450]: time="2024-12-13T02:40:03.534792793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc987f856-nzl97,Uid:6b0bad46-7056-4564-8d0b-45ea5ddf98fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2ab06205771ae3de53f51e5e24b0b13c1bd3ee460aeb1bf739aec468b080aaf\"" Dec 13 02:40:04.558992 kubelet[2672]: E1213 02:40:04.558884 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:05.807296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681548774.mount: Deactivated successfully. 
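The repeated driver-call failures above come from the kubelet's FlexVolume prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the argument `init` and unmarshals stdout as JSON. At this point the nodeagent~uds/uds executable does not exist yet, so the call produces no output and decoding the empty string fails with "unexpected end of JSON input". A minimal sketch of that failure mode (a standalone illustration, not the kubelet's actual code; the driver path is taken from the log):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print in
// response to "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		// With the binary absent the exec itself fails and out stays empty
		// (the kubelet's exec wrapper logs this case as "executable file
		// not found in $PATH").
		fmt.Println("driver call failed:", err)
	}
	var st driverStatus
	// Unmarshalling the empty output reproduces the logged error:
	// "unexpected end of JSON input".
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal output for command: init:", err)
	}
}
```

The prober treats this as non-fatal: it skips the directory and retries on its next pass, which is why the same triplet of messages recurs until a driver is actually installed there.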
Dec 13 02:40:06.442905 containerd[1450]: time="2024-12-13T02:40:06.442774462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:06.445800 containerd[1450]: time="2024-12-13T02:40:06.445326480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 02:40:06.447641 containerd[1450]: time="2024-12-13T02:40:06.447519686Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:06.452831 containerd[1450]: time="2024-12-13T02:40:06.452720687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:06.456162 containerd[1450]: time="2024-12-13T02:40:06.454926306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.921067362s" Dec 13 02:40:06.456162 containerd[1450]: time="2024-12-13T02:40:06.455036746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 02:40:06.458156 containerd[1450]: time="2024-12-13T02:40:06.457692913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:40:06.463569 containerd[1450]: time="2024-12-13T02:40:06.463268147Z" level=info msg="CreateContainer within sandbox \"adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:40:06.514231 containerd[1450]: time="2024-12-13T02:40:06.513333958Z" level=info msg="CreateContainer within sandbox \"adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35\"" Dec 13 02:40:06.516095 containerd[1450]: time="2024-12-13T02:40:06.516026323Z" level=info msg="StartContainer for \"d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35\"" Dec 13 02:40:06.558749 kubelet[2672]: E1213 02:40:06.558707 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:06.610784 systemd[1]: Started cri-containerd-d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35.scope - libcontainer container d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35. 
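The flexvol-driver container created and started above is, in Calico's manifests, the first init container of calico-node; its job is to install the uds FlexVolume driver into the very directory the 02:40:03 probe errors were complaining about, which is consistent with that error burst not recurring later in this log. A rough sketch of such an install step, where the source path inside the pod2daemon-flexvol image is an assumption:

```go
package main

import (
	"io"
	"os"
	"path/filepath"
)

func main() {
	// Assumed location of the driver binary inside the image.
	src, err := os.Open("/usr/local/bin/flexvol")
	if err != nil {
		panic(err)
	}
	defer src.Close()

	// Host path (bind-mounted into the init container) that the kubelet probes.
	dir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	dst, err := os.OpenFile(filepath.Join(dir, "uds"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		panic(err)
	}
	defer dst.Close()

	// Copy the driver into place; the file mode above makes it executable,
	// so the kubelet's next "init" probe can run it.
	if _, err := io.Copy(dst, src); err != nil {
		panic(err)
	}
}
```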
Dec 13 02:40:06.656341 containerd[1450]: time="2024-12-13T02:40:06.656293484Z" level=info msg="StartContainer for \"d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35\" returns successfully" Dec 13 02:40:06.673256 systemd[1]: cri-containerd-d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35.scope: Deactivated successfully. Dec 13 02:40:06.742726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35-rootfs.mount: Deactivated successfully. Dec 13 02:40:06.785624 containerd[1450]: time="2024-12-13T02:40:06.785351625Z" level=info msg="shim disconnected" id=d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35 namespace=k8s.io Dec 13 02:40:06.785624 containerd[1450]: time="2024-12-13T02:40:06.785411069Z" level=warning msg="cleaning up after shim disconnected" id=d33fdb6163cac5926d303777ad6e60167b2c01f59886683119eeb9c672782a35 namespace=k8s.io Dec 13 02:40:06.785624 containerd[1450]: time="2024-12-13T02:40:06.785421549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:40:08.560671 kubelet[2672]: E1213 02:40:08.560304 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:10.559340 kubelet[2672]: E1213 02:40:10.559021 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:11.881048 containerd[1450]: time="2024-12-13T02:40:11.880937364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:11.882515 containerd[1450]: time="2024-12-13T02:40:11.882458834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 02:40:11.884095 containerd[1450]: time="2024-12-13T02:40:11.884031212Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:11.886703 containerd[1450]: time="2024-12-13T02:40:11.886653976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:11.887778 containerd[1450]: time="2024-12-13T02:40:11.887536492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 5.429776471s" Dec 13 02:40:11.887778 containerd[1450]: time="2024-12-13T02:40:11.887568123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 02:40:11.890498 containerd[1450]: 
time="2024-12-13T02:40:11.888523228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:40:11.913562 containerd[1450]: time="2024-12-13T02:40:11.913515105Z" level=info msg="CreateContainer within sandbox \"d2ab06205771ae3de53f51e5e24b0b13c1bd3ee460aeb1bf739aec468b080aaf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:40:11.964683 containerd[1450]: time="2024-12-13T02:40:11.964552520Z" level=info msg="CreateContainer within sandbox \"d2ab06205771ae3de53f51e5e24b0b13c1bd3ee460aeb1bf739aec468b080aaf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f47c9f5f5ea20fd08ab98a36cb255522845aba8db6c68d6ff5efd70bb17f5599\"" Dec 13 02:40:11.965876 containerd[1450]: time="2024-12-13T02:40:11.965834056Z" level=info msg="StartContainer for \"f47c9f5f5ea20fd08ab98a36cb255522845aba8db6c68d6ff5efd70bb17f5599\"" Dec 13 02:40:12.014633 systemd[1]: Started cri-containerd-f47c9f5f5ea20fd08ab98a36cb255522845aba8db6c68d6ff5efd70bb17f5599.scope - libcontainer container f47c9f5f5ea20fd08ab98a36cb255522845aba8db6c68d6ff5efd70bb17f5599. Dec 13 02:40:12.077520 containerd[1450]: time="2024-12-13T02:40:12.076410963Z" level=info msg="StartContainer for \"f47c9f5f5ea20fd08ab98a36cb255522845aba8db6c68d6ff5efd70bb17f5599\" returns successfully" Dec 13 02:40:12.559585 kubelet[2672]: E1213 02:40:12.559038 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:12.811109 kubelet[2672]: I1213 02:40:12.809466 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5bc987f856-nzl97" podStartSLOduration=3.459251861 podStartE2EDuration="11.809375373s" podCreationTimestamp="2024-12-13 02:40:01 +0000 UTC" firstStartedPulling="2024-12-13 02:40:03.537835183 +0000 UTC m=+26.141215193" lastFinishedPulling="2024-12-13 02:40:11.887958675 +0000 UTC m=+34.491338705" observedRunningTime="2024-12-13 02:40:12.808675904 +0000 UTC m=+35.412056005" watchObservedRunningTime="2024-12-13 02:40:12.809375373 +0000 UTC m=+35.412755433" Dec 13 02:40:13.790288 kubelet[2672]: I1213 02:40:13.790212 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:40:14.559377 kubelet[2672]: E1213 02:40:14.559307 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:16.560029 kubelet[2672]: E1213 02:40:16.559841 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:18.558887 kubelet[2672]: E1213 02:40:18.558838 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" 
podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:20.132101 containerd[1450]: time="2024-12-13T02:40:20.131895467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:20.135415 containerd[1450]: time="2024-12-13T02:40:20.135301872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 02:40:20.139849 containerd[1450]: time="2024-12-13T02:40:20.139547426Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:20.147819 containerd[1450]: time="2024-12-13T02:40:20.147701628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:20.152239 containerd[1450]: time="2024-12-13T02:40:20.152119699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 8.263544814s" Dec 13 02:40:20.152239 containerd[1450]: time="2024-12-13T02:40:20.152206384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 02:40:20.157599 containerd[1450]: time="2024-12-13T02:40:20.157350922Z" level=info msg="CreateContainer within sandbox \"adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:40:20.246541 containerd[1450]: time="2024-12-13T02:40:20.246401811Z" level=info msg="CreateContainer within sandbox \"adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7\"" Dec 13 02:40:20.248762 containerd[1450]: time="2024-12-13T02:40:20.248740782Z" level=info msg="StartContainer for \"688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7\"" Dec 13 02:40:20.386937 systemd[1]: Started cri-containerd-688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7.scope - libcontainer container 688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7. 
Dec 13 02:40:20.636772 kubelet[2672]: E1213 02:40:20.559672 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:20.942869 containerd[1450]: time="2024-12-13T02:40:20.942726565Z" level=info msg="StartContainer for \"688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7\" returns successfully" Dec 13 02:40:22.559661 kubelet[2672]: E1213 02:40:22.559135 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:22.899808 systemd[1]: cri-containerd-688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7.scope: Deactivated successfully. Dec 13 02:40:22.945364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7-rootfs.mount: Deactivated successfully. Dec 13 02:40:22.967118 containerd[1450]: time="2024-12-13T02:40:22.966877865Z" level=info msg="shim disconnected" id=688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7 namespace=k8s.io Dec 13 02:40:22.967118 containerd[1450]: time="2024-12-13T02:40:22.966938680Z" level=warning msg="cleaning up after shim disconnected" id=688d43b680e9f4b703db45620261ac07f3e93cb232f11df4d231725f537ee5e7 namespace=k8s.io Dec 13 02:40:22.967118 containerd[1450]: time="2024-12-13T02:40:22.966948408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:40:22.993386 kubelet[2672]: I1213 02:40:22.990799 2672 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:40:23.032833 kubelet[2672]: I1213 02:40:23.032775 2672 topology_manager.go:215] "Topology Admit Handler" podUID="33c3e9fb-68ef-4580-9c08-9e7c76469b7a" podNamespace="kube-system" podName="coredns-76f75df574-sfprt" Dec 13 02:40:23.046584 systemd[1]: Created slice kubepods-burstable-pod33c3e9fb_68ef_4580_9c08_9e7c76469b7a.slice - libcontainer container kubepods-burstable-pod33c3e9fb_68ef_4580_9c08_9e7c76469b7a.slice. 
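The install-cni container that just exited wrote Calico's CNI network configuration onto the host, which is what flips the runtime's NetworkReady status and lets the kubelet log "Fast updating node status as it just became ready"; the pending coredns and calico pods are admitted immediately afterwards. A hedged approximation of that readiness condition, assuming the conventional /etc/cni/net.d config directory and simplifying containerd's actual check:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady approximates the runtime's NetworkReady condition: it is
// satisfied once at least one CNI config file exists in the config dir
// (Calico's install-cni writes one; the exact filename is not in this log).
func networkReady(confDir string) bool {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("NetworkReady:", networkReady("/etc/cni/net.d"))
}
```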
Dec 13 02:40:23.053611 kubelet[2672]: I1213 02:40:23.053313 2672 topology_manager.go:215] "Topology Admit Handler" podUID="0fc46b44-6bcb-489b-aece-768f5c9d6bf3" podNamespace="calico-apiserver" podName="calico-apiserver-d894d9fbd-5thcx" Dec 13 02:40:23.066877 kubelet[2672]: I1213 02:40:23.066032 2672 topology_manager.go:215] "Topology Admit Handler" podUID="7b74c729-793f-4b9e-8c1e-327ee29af018" podNamespace="calico-system" podName="calico-kube-controllers-585df87b9-jhcxr" Dec 13 02:40:23.066877 kubelet[2672]: I1213 02:40:23.066327 2672 topology_manager.go:215] "Topology Admit Handler" podUID="d1f56eec-7f0d-4c0d-9522-9259829f7521" podNamespace="calico-apiserver" podName="calico-apiserver-d894d9fbd-w6s8z" Dec 13 02:40:23.072451 kubelet[2672]: I1213 02:40:23.069714 2672 topology_manager.go:215] "Topology Admit Handler" podUID="b6fec068-1607-4b7c-a071-cd5974d02433" podNamespace="kube-system" podName="coredns-76f75df574-qnq6v" Dec 13 02:40:23.078393 systemd[1]: Created slice kubepods-besteffort-pod0fc46b44_6bcb_489b_aece_768f5c9d6bf3.slice - libcontainer container kubepods-besteffort-pod0fc46b44_6bcb_489b_aece_768f5c9d6bf3.slice. Dec 13 02:40:23.093464 systemd[1]: Created slice kubepods-besteffort-podd1f56eec_7f0d_4c0d_9522_9259829f7521.slice - libcontainer container kubepods-besteffort-podd1f56eec_7f0d_4c0d_9522_9259829f7521.slice. Dec 13 02:40:23.100138 systemd[1]: Created slice kubepods-besteffort-pod7b74c729_793f_4b9e_8c1e_327ee29af018.slice - libcontainer container kubepods-besteffort-pod7b74c729_793f_4b9e_8c1e_327ee29af018.slice. Dec 13 02:40:23.106656 systemd[1]: Created slice kubepods-burstable-podb6fec068_1607_4b7c_a071_cd5974d02433.slice - libcontainer container kubepods-burstable-podb6fec068_1607_4b7c_a071_cd5974d02433.slice. Dec 13 02:40:23.176608 kubelet[2672]: I1213 02:40:23.175449 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6fec068-1607-4b7c-a071-cd5974d02433-config-volume\") pod \"coredns-76f75df574-qnq6v\" (UID: \"b6fec068-1607-4b7c-a071-cd5974d02433\") " pod="kube-system/coredns-76f75df574-qnq6v" Dec 13 02:40:23.176608 kubelet[2672]: I1213 02:40:23.175533 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33c3e9fb-68ef-4580-9c08-9e7c76469b7a-config-volume\") pod \"coredns-76f75df574-sfprt\" (UID: \"33c3e9fb-68ef-4580-9c08-9e7c76469b7a\") " pod="kube-system/coredns-76f75df574-sfprt" Dec 13 02:40:23.176608 kubelet[2672]: I1213 02:40:23.175570 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xf95\" (UniqueName: \"kubernetes.io/projected/7b74c729-793f-4b9e-8c1e-327ee29af018-kube-api-access-2xf95\") pod \"calico-kube-controllers-585df87b9-jhcxr\" (UID: \"7b74c729-793f-4b9e-8c1e-327ee29af018\") " pod="calico-system/calico-kube-controllers-585df87b9-jhcxr" Dec 13 02:40:23.176608 kubelet[2672]: I1213 02:40:23.175603 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1f56eec-7f0d-4c0d-9522-9259829f7521-calico-apiserver-certs\") pod \"calico-apiserver-d894d9fbd-w6s8z\" (UID: \"d1f56eec-7f0d-4c0d-9522-9259829f7521\") " pod="calico-apiserver/calico-apiserver-d894d9fbd-w6s8z" Dec 13 02:40:23.176608 kubelet[2672]: I1213 02:40:23.175638 2672 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvml\" (UniqueName: \"kubernetes.io/projected/33c3e9fb-68ef-4580-9c08-9e7c76469b7a-kube-api-access-5wvml\") pod \"coredns-76f75df574-sfprt\" (UID: \"33c3e9fb-68ef-4580-9c08-9e7c76469b7a\") " pod="kube-system/coredns-76f75df574-sfprt" Dec 13 02:40:23.177243 kubelet[2672]: I1213 02:40:23.175677 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h86ls\" (UniqueName: \"kubernetes.io/projected/d1f56eec-7f0d-4c0d-9522-9259829f7521-kube-api-access-h86ls\") pod \"calico-apiserver-d894d9fbd-w6s8z\" (UID: \"d1f56eec-7f0d-4c0d-9522-9259829f7521\") " pod="calico-apiserver/calico-apiserver-d894d9fbd-w6s8z" Dec 13 02:40:23.177243 kubelet[2672]: I1213 02:40:23.175712 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0fc46b44-6bcb-489b-aece-768f5c9d6bf3-calico-apiserver-certs\") pod \"calico-apiserver-d894d9fbd-5thcx\" (UID: \"0fc46b44-6bcb-489b-aece-768f5c9d6bf3\") " pod="calico-apiserver/calico-apiserver-d894d9fbd-5thcx" Dec 13 02:40:23.177243 kubelet[2672]: I1213 02:40:23.175742 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzvxj\" (UniqueName: \"kubernetes.io/projected/0fc46b44-6bcb-489b-aece-768f5c9d6bf3-kube-api-access-qzvxj\") pod \"calico-apiserver-d894d9fbd-5thcx\" (UID: \"0fc46b44-6bcb-489b-aece-768f5c9d6bf3\") " pod="calico-apiserver/calico-apiserver-d894d9fbd-5thcx" Dec 13 02:40:23.177243 kubelet[2672]: I1213 02:40:23.175771 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55d2\" (UniqueName: \"kubernetes.io/projected/b6fec068-1607-4b7c-a071-cd5974d02433-kube-api-access-z55d2\") pod \"coredns-76f75df574-qnq6v\" (UID: \"b6fec068-1607-4b7c-a071-cd5974d02433\") " pod="kube-system/coredns-76f75df574-qnq6v" Dec 13 02:40:23.177243 kubelet[2672]: I1213 02:40:23.175800 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b74c729-793f-4b9e-8c1e-327ee29af018-tigera-ca-bundle\") pod \"calico-kube-controllers-585df87b9-jhcxr\" (UID: \"7b74c729-793f-4b9e-8c1e-327ee29af018\") " pod="calico-system/calico-kube-controllers-585df87b9-jhcxr" Dec 13 02:40:23.363605 containerd[1450]: time="2024-12-13T02:40:23.363199122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sfprt,Uid:33c3e9fb-68ef-4580-9c08-9e7c76469b7a,Namespace:kube-system,Attempt:0,}" Dec 13 02:40:23.388581 containerd[1450]: time="2024-12-13T02:40:23.388449190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-5thcx,Uid:0fc46b44-6bcb-489b-aece-768f5c9d6bf3,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:40:23.398098 containerd[1450]: time="2024-12-13T02:40:23.398020663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-w6s8z,Uid:d1f56eec-7f0d-4c0d-9522-9259829f7521,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:40:23.403980 containerd[1450]: time="2024-12-13T02:40:23.403922688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-585df87b9-jhcxr,Uid:7b74c729-793f-4b9e-8c1e-327ee29af018,Namespace:calico-system,Attempt:0,}" Dec 13 02:40:23.411125 containerd[1450]: time="2024-12-13T02:40:23.410809729Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qnq6v,Uid:b6fec068-1607-4b7c-a071-cd5974d02433,Namespace:kube-system,Attempt:0,}" Dec 13 02:40:23.968852 containerd[1450]: time="2024-12-13T02:40:23.968076406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:40:24.472954 kubelet[2672]: I1213 02:40:24.472916 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:40:24.567597 systemd[1]: Created slice kubepods-besteffort-pod8e72692f_d22b_4813_bb35_ab03aefb087b.slice - libcontainer container kubepods-besteffort-pod8e72692f_d22b_4813_bb35_ab03aefb087b.slice. Dec 13 02:40:24.572836 containerd[1450]: time="2024-12-13T02:40:24.571785923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9qnbf,Uid:8e72692f-d22b-4813-bb35-ab03aefb087b,Namespace:calico-system,Attempt:0,}" Dec 13 02:40:24.998545 containerd[1450]: time="2024-12-13T02:40:24.998455895Z" level=error msg="Failed to destroy network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.001272 containerd[1450]: time="2024-12-13T02:40:25.001220819Z" level=error msg="Failed to destroy network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.008810 containerd[1450]: time="2024-12-13T02:40:25.008761350Z" level=error msg="encountered an error cleaning up failed sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.009158 containerd[1450]: time="2024-12-13T02:40:25.009085342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-w6s8z,Uid:d1f56eec-7f0d-4c0d-9522-9259829f7521,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.018422 containerd[1450]: time="2024-12-13T02:40:25.009096994Z" level=error msg="encountered an error cleaning up failed sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.018691 containerd[1450]: time="2024-12-13T02:40:25.018662107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-585df87b9-jhcxr,Uid:7b74c729-793f-4b9e-8c1e-327ee29af018,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.019104 containerd[1450]: time="2024-12-13T02:40:25.019063476Z" level=error msg="Failed to destroy network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.019707 containerd[1450]: time="2024-12-13T02:40:25.019455468Z" level=error msg="encountered an error cleaning up failed sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.019707 containerd[1450]: time="2024-12-13T02:40:25.019517135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-5thcx,Uid:0fc46b44-6bcb-489b-aece-768f5c9d6bf3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.019707 containerd[1450]: time="2024-12-13T02:40:25.019606264Z" level=error msg="Failed to destroy network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.020060 containerd[1450]: time="2024-12-13T02:40:25.020035546Z" level=error msg="encountered an error cleaning up failed sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.020154 containerd[1450]: time="2024-12-13T02:40:25.020131668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sfprt,Uid:33c3e9fb-68ef-4580-9c08-9e7c76469b7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.022776 containerd[1450]: time="2024-12-13T02:40:25.020840540Z" level=error msg="Failed to destroy network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.022776 containerd[1450]: time="2024-12-13T02:40:25.021434985Z" level=error msg="encountered an error cleaning up failed sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.022776 containerd[1450]: time="2024-12-13T02:40:25.021526587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qnq6v,Uid:b6fec068-1607-4b7c-a071-cd5974d02433,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.022776 containerd[1450]: time="2024-12-13T02:40:25.021687652Z" level=error msg="Failed to destroy network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.022776 containerd[1450]: time="2024-12-13T02:40:25.022015654Z" level=error msg="encountered an error cleaning up failed sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.022776 containerd[1450]: time="2024-12-13T02:40:25.022085686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9qnbf,Uid:8e72692f-d22b-4813-bb35-ab03aefb087b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.023072 kubelet[2672]: E1213 02:40:25.020504 2672 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.023072 kubelet[2672]: E1213 02:40:25.020578 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sfprt" Dec 13 02:40:25.023072 kubelet[2672]: E1213 02:40:25.020609 2672 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sfprt" Dec 13 
02:40:25.023210 kubelet[2672]: E1213 02:40:25.020688 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sfprt_kube-system(33c3e9fb-68ef-4580-9c08-9e7c76469b7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sfprt_kube-system(33c3e9fb-68ef-4580-9c08-9e7c76469b7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sfprt" podUID="33c3e9fb-68ef-4580-9c08-9e7c76469b7a" Dec 13 02:40:25.023210 kubelet[2672]: E1213 02:40:25.021003 2672 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.023210 kubelet[2672]: E1213 02:40:25.021032 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d894d9fbd-w6s8z" Dec 13 02:40:25.023336 kubelet[2672]: E1213 02:40:25.021059 2672 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d894d9fbd-w6s8z" Dec 13 02:40:25.023336 kubelet[2672]: E1213 02:40:25.022297 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d894d9fbd-w6s8z_calico-apiserver(d1f56eec-7f0d-4c0d-9522-9259829f7521)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d894d9fbd-w6s8z_calico-apiserver(d1f56eec-7f0d-4c0d-9522-9259829f7521)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d894d9fbd-w6s8z" podUID="d1f56eec-7f0d-4c0d-9522-9259829f7521" Dec 13 02:40:25.023336 kubelet[2672]: E1213 02:40:25.022411 2672 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
02:40:25.023451 kubelet[2672]: E1213 02:40:25.022455 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d894d9fbd-5thcx" Dec 13 02:40:25.023451 kubelet[2672]: E1213 02:40:25.022527 2672 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d894d9fbd-5thcx" Dec 13 02:40:25.023451 kubelet[2672]: E1213 02:40:25.022544 2672 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.023451 kubelet[2672]: E1213 02:40:25.022588 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-585df87b9-jhcxr" Dec 13 02:40:25.023692 kubelet[2672]: E1213 02:40:25.022610 2672 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-585df87b9-jhcxr" Dec 13 02:40:25.023692 kubelet[2672]: E1213 02:40:25.022619 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d894d9fbd-5thcx_calico-apiserver(0fc46b44-6bcb-489b-aece-768f5c9d6bf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d894d9fbd-5thcx_calico-apiserver(0fc46b44-6bcb-489b-aece-768f5c9d6bf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d894d9fbd-5thcx" podUID="0fc46b44-6bcb-489b-aece-768f5c9d6bf3" Dec 13 02:40:25.023787 kubelet[2672]: E1213 02:40:25.022663 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-585df87b9-jhcxr_calico-system(7b74c729-793f-4b9e-8c1e-327ee29af018)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-585df87b9-jhcxr_calico-system(7b74c729-793f-4b9e-8c1e-327ee29af018)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-585df87b9-jhcxr" podUID="7b74c729-793f-4b9e-8c1e-327ee29af018" Dec 13 02:40:25.023787 kubelet[2672]: E1213 02:40:25.022845 2672 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.023787 kubelet[2672]: E1213 02:40:25.023076 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qnq6v" Dec 13 02:40:25.023899 kubelet[2672]: E1213 02:40:25.023163 2672 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:25.023899 kubelet[2672]: E1213 02:40:25.023193 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9qnbf" Dec 13 02:40:25.023899 kubelet[2672]: E1213 02:40:25.023129 2672 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qnq6v" Dec 13 02:40:25.024000 kubelet[2672]: E1213 02:40:25.023290 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qnq6v_kube-system(b6fec068-1607-4b7c-a071-cd5974d02433)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qnq6v_kube-system(b6fec068-1607-4b7c-a071-cd5974d02433)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qnq6v" podUID="b6fec068-1607-4b7c-a071-cd5974d02433" Dec 13 02:40:25.024000 kubelet[2672]: E1213 02:40:25.023323 2672 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9qnbf" Dec 13 02:40:25.024000 kubelet[2672]: E1213 02:40:25.023370 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9qnbf_calico-system(8e72692f-d22b-4813-bb35-ab03aefb087b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9qnbf_calico-system(8e72692f-d22b-4813-bb35-ab03aefb087b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:25.170592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d-shm.mount: Deactivated successfully. Dec 13 02:40:25.171746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65-shm.mount: Deactivated successfully. Dec 13 02:40:25.171834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52-shm.mount: Deactivated successfully. 
Dec 13 02:40:25.988326 kubelet[2672]: I1213 02:40:25.988227 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:40:26.002417 kubelet[2672]: I1213 02:40:26.000668 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:40:26.003918 containerd[1450]: time="2024-12-13T02:40:26.003025653Z" level=info msg="StopPodSandbox for \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\"" Dec 13 02:40:26.011384 containerd[1450]: time="2024-12-13T02:40:26.008106507Z" level=info msg="Ensure that sandbox 397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65 in task-service has been cleanup successfully" Dec 13 02:40:26.012984 containerd[1450]: time="2024-12-13T02:40:26.012940804Z" level=info msg="StopPodSandbox for \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\"" Dec 13 02:40:26.014918 containerd[1450]: time="2024-12-13T02:40:26.014863622Z" level=info msg="Ensure that sandbox 9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d in task-service has been cleanup successfully" Dec 13 02:40:26.019096 kubelet[2672]: I1213 02:40:26.019043 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:40:26.021912 containerd[1450]: time="2024-12-13T02:40:26.021849490Z" level=info msg="StopPodSandbox for \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\"" Dec 13 02:40:26.023325 containerd[1450]: time="2024-12-13T02:40:26.022989428Z" level=info msg="Ensure that sandbox 545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33 in task-service has been cleanup successfully" Dec 13 02:40:26.031471 kubelet[2672]: I1213 02:40:26.031420 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:40:26.038554 containerd[1450]: time="2024-12-13T02:40:26.038101541Z" level=info msg="StopPodSandbox for \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\"" Dec 13 02:40:26.038554 containerd[1450]: time="2024-12-13T02:40:26.038441844Z" level=info msg="Ensure that sandbox 299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b in task-service has been cleanup successfully" Dec 13 02:40:26.040991 kubelet[2672]: I1213 02:40:26.040201 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:40:26.041663 containerd[1450]: time="2024-12-13T02:40:26.041612944Z" level=info msg="StopPodSandbox for \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\"" Dec 13 02:40:26.042134 containerd[1450]: time="2024-12-13T02:40:26.042087372Z" level=info msg="Ensure that sandbox 94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52 in task-service has been cleanup successfully" Dec 13 02:40:26.054264 kubelet[2672]: I1213 02:40:26.054220 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:40:26.059839 containerd[1450]: time="2024-12-13T02:40:26.059770430Z" level=info msg="StopPodSandbox for \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\"" Dec 13 02:40:26.060696 
containerd[1450]: time="2024-12-13T02:40:26.060653081Z" level=info msg="Ensure that sandbox 346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52 in task-service has been cleanup successfully" Dec 13 02:40:26.127930 containerd[1450]: time="2024-12-13T02:40:26.127654925Z" level=error msg="StopPodSandbox for \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\" failed" error="failed to destroy network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:26.130203 kubelet[2672]: E1213 02:40:26.128564 2672 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:40:26.130203 kubelet[2672]: E1213 02:40:26.128687 2672 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65"} Dec 13 02:40:26.130203 kubelet[2672]: E1213 02:40:26.128743 2672 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33c3e9fb-68ef-4580-9c08-9e7c76469b7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:40:26.130203 kubelet[2672]: E1213 02:40:26.128789 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33c3e9fb-68ef-4580-9c08-9e7c76469b7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sfprt" podUID="33c3e9fb-68ef-4580-9c08-9e7c76469b7a" Dec 13 02:40:26.163694 containerd[1450]: time="2024-12-13T02:40:26.163604866Z" level=error msg="StopPodSandbox for \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\" failed" error="failed to destroy network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:26.163976 kubelet[2672]: E1213 02:40:26.163956 2672 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:40:26.164035 kubelet[2672]: E1213 02:40:26.164016 2672 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52"} Dec 13 02:40:26.164904 kubelet[2672]: E1213 02:40:26.164078 2672 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1f56eec-7f0d-4c0d-9522-9259829f7521\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:40:26.164904 kubelet[2672]: E1213 02:40:26.164131 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1f56eec-7f0d-4c0d-9522-9259829f7521\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d894d9fbd-w6s8z" podUID="d1f56eec-7f0d-4c0d-9522-9259829f7521" Dec 13 02:40:26.175826 containerd[1450]: time="2024-12-13T02:40:26.175752421Z" level=error msg="StopPodSandbox for \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\" failed" error="failed to destroy network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:26.176357 kubelet[2672]: E1213 02:40:26.176079 2672 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:40:26.176357 kubelet[2672]: E1213 02:40:26.176141 2672 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33"} Dec 13 02:40:26.176357 kubelet[2672]: E1213 02:40:26.176195 2672 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6fec068-1607-4b7c-a071-cd5974d02433\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:40:26.176357 kubelet[2672]: E1213 02:40:26.176245 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b6fec068-1607-4b7c-a071-cd5974d02433\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qnq6v" podUID="b6fec068-1607-4b7c-a071-cd5974d02433" Dec 13 02:40:26.180403 containerd[1450]: time="2024-12-13T02:40:26.180352635Z" level=error msg="StopPodSandbox for \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\" failed" error="failed to destroy network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:26.181024 containerd[1450]: time="2024-12-13T02:40:26.180538848Z" level=error msg="StopPodSandbox for \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\" failed" error="failed to destroy network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:26.181073 kubelet[2672]: E1213 02:40:26.180785 2672 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:40:26.181073 kubelet[2672]: E1213 02:40:26.180806 2672 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:40:26.181073 kubelet[2672]: E1213 02:40:26.180841 2672 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d"} Dec 13 02:40:26.181073 kubelet[2672]: E1213 02:40:26.180865 2672 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52"} Dec 13 02:40:26.181073 kubelet[2672]: E1213 02:40:26.180898 2672 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0fc46b44-6bcb-489b-aece-768f5c9d6bf3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Dec 13 02:40:26.181273 kubelet[2672]: E1213 02:40:26.180910 2672 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b74c729-793f-4b9e-8c1e-327ee29af018\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:40:26.181273 kubelet[2672]: E1213 02:40:26.180941 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fc46b44-6bcb-489b-aece-768f5c9d6bf3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d894d9fbd-5thcx" podUID="0fc46b44-6bcb-489b-aece-768f5c9d6bf3" Dec 13 02:40:26.181273 kubelet[2672]: E1213 02:40:26.180950 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b74c729-793f-4b9e-8c1e-327ee29af018\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-585df87b9-jhcxr" podUID="7b74c729-793f-4b9e-8c1e-327ee29af018" Dec 13 02:40:26.183235 containerd[1450]: time="2024-12-13T02:40:26.183191326Z" level=error msg="StopPodSandbox for \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\" failed" error="failed to destroy network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:40:26.183434 kubelet[2672]: E1213 02:40:26.183413 2672 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:40:26.184506 kubelet[2672]: E1213 02:40:26.183455 2672 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b"} Dec 13 02:40:26.184506 kubelet[2672]: E1213 02:40:26.183608 2672 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e72692f-d22b-4813-bb35-ab03aefb087b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:40:26.184506 kubelet[2672]: E1213 02:40:26.183668 2672 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e72692f-d22b-4813-bb35-ab03aefb087b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9qnbf" podUID="8e72692f-d22b-4813-bb35-ab03aefb087b" Dec 13 02:40:33.684383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608844804.mount: Deactivated successfully. Dec 13 02:40:35.313779 containerd[1450]: time="2024-12-13T02:40:35.313660506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 02:40:35.348830 containerd[1450]: time="2024-12-13T02:40:35.348752799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:35.385611 containerd[1450]: time="2024-12-13T02:40:35.385509738Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:35.395339 containerd[1450]: time="2024-12-13T02:40:35.395163232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:35.396564 containerd[1450]: time="2024-12-13T02:40:35.395939568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 11.42719204s" Dec 13 02:40:35.396564 containerd[1450]: time="2024-12-13T02:40:35.395992979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 02:40:35.610319 containerd[1450]: time="2024-12-13T02:40:35.609076353Z" level=info msg="CreateContainer within sandbox \"adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:40:35.744163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724959673.mount: Deactivated successfully. 
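
As a quick sanity check on the pull just logged: 142,741,872 bytes in 11.42719204 s works out to roughly 12.5 MB/s. The figures below are copied from the PullImage message; nothing else is assumed:

package main

import "fmt"

func main() {
	const imageBytes = 142741872.0  // size "142741872" from the PullImage line
	const pullSeconds = 11.42719204 // "in 11.42719204s" from the same line
	fmt.Printf("~%.1f MB/s\n", imageBytes/pullSeconds/1e6) // prints ~12.5 MB/s
}
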
Dec 13 02:40:35.788999 containerd[1450]: time="2024-12-13T02:40:35.788911714Z" level=info msg="CreateContainer within sandbox \"adb96023d37e849ffc1cccf4ae2458c1e5dd797a8c771709842266888a29679e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e591f0b6f2e725ce8004e0080aeb1cd014f6f41f0b21cb09f7a5e10f4f916f6f\"" Dec 13 02:40:35.803327 containerd[1450]: time="2024-12-13T02:40:35.802943629Z" level=info msg="StartContainer for \"e591f0b6f2e725ce8004e0080aeb1cd014f6f41f0b21cb09f7a5e10f4f916f6f\"" Dec 13 02:40:36.149737 systemd[1]: Started cri-containerd-e591f0b6f2e725ce8004e0080aeb1cd014f6f41f0b21cb09f7a5e10f4f916f6f.scope - libcontainer container e591f0b6f2e725ce8004e0080aeb1cd014f6f41f0b21cb09f7a5e10f4f916f6f. Dec 13 02:40:36.600426 containerd[1450]: time="2024-12-13T02:40:36.600250148Z" level=info msg="StartContainer for \"e591f0b6f2e725ce8004e0080aeb1cd014f6f41f0b21cb09f7a5e10f4f916f6f\" returns successfully" Dec 13 02:40:36.928606 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:40:36.933026 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 02:40:37.639121 containerd[1450]: time="2024-12-13T02:40:37.637848599Z" level=info msg="StopPodSandbox for \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\"" Dec 13 02:40:37.847628 kubelet[2672]: I1213 02:40:37.847578 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-5tmsf" podStartSLOduration=3.947090175 podStartE2EDuration="35.810638088s" podCreationTimestamp="2024-12-13 02:40:02 +0000 UTC" firstStartedPulling="2024-12-13 02:40:03.532899555 +0000 UTC m=+26.136279565" lastFinishedPulling="2024-12-13 02:40:35.396447467 +0000 UTC m=+57.999827478" observedRunningTime="2024-12-13 02:40:37.151684842 +0000 UTC m=+59.755064863" watchObservedRunningTime="2024-12-13 02:40:37.810638088 +0000 UTC m=+60.414018098" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:37.809 [INFO][3776] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:37.812 [INFO][3776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" iface="eth0" netns="/var/run/netns/cni-4c62bc45-239b-807f-a574-684582b2b6f8" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:37.812 [INFO][3776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" iface="eth0" netns="/var/run/netns/cni-4c62bc45-239b-807f-a574-684582b2b6f8" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:37.816 [INFO][3776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" iface="eth0" netns="/var/run/netns/cni-4c62bc45-239b-807f-a574-684582b2b6f8" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:37.816 [INFO][3776] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:37.816 [INFO][3776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:38.337 [INFO][3782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:38.344 [INFO][3782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:38.345 [INFO][3782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:38.378 [WARNING][3782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:38.378 [INFO][3782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:38.382 [INFO][3782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:40:38.389626 containerd[1450]: 2024-12-13 02:40:38.385 [INFO][3776] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:40:38.393239 containerd[1450]: time="2024-12-13T02:40:38.390915091Z" level=info msg="TearDown network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\" successfully" Dec 13 02:40:38.393239 containerd[1450]: time="2024-12-13T02:40:38.392567972Z" level=info msg="StopPodSandbox for \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\" returns successfully" Dec 13 02:40:38.396860 containerd[1450]: time="2024-12-13T02:40:38.395419365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9qnbf,Uid:8e72692f-d22b-4813-bb35-ab03aefb087b,Namespace:calico-system,Attempt:1,}" Dec 13 02:40:38.407233 systemd[1]: run-netns-cni\x2d4c62bc45\x2d239b\x2d807f\x2da574\x2d684582b2b6f8.mount: Deactivated successfully. 
Dec 13 02:40:38.561880 containerd[1450]: time="2024-12-13T02:40:38.561051040Z" level=info msg="StopPodSandbox for \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\"" Dec 13 02:40:38.562871 containerd[1450]: time="2024-12-13T02:40:38.562848654Z" level=info msg="StopPodSandbox for \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\"" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.762 [INFO][3878] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.762 [INFO][3878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" iface="eth0" netns="/var/run/netns/cni-1ac8c4e9-6576-1651-0bc5-22acc28bea92" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.768 [INFO][3878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" iface="eth0" netns="/var/run/netns/cni-1ac8c4e9-6576-1651-0bc5-22acc28bea92" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.768 [INFO][3878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" iface="eth0" netns="/var/run/netns/cni-1ac8c4e9-6576-1651-0bc5-22acc28bea92" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.770 [INFO][3878] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.770 [INFO][3878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.833 [INFO][3917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.833 [INFO][3917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.835 [INFO][3917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.846 [WARNING][3917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.846 [INFO][3917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.850 [INFO][3917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:40:38.868459 containerd[1450]: 2024-12-13 02:40:38.858 [INFO][3878] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:40:38.874041 containerd[1450]: time="2024-12-13T02:40:38.873254387Z" level=info msg="TearDown network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\" successfully" Dec 13 02:40:38.874041 containerd[1450]: time="2024-12-13T02:40:38.873292428Z" level=info msg="StopPodSandbox for \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\" returns successfully" Dec 13 02:40:38.874961 systemd-networkd[1362]: cali165c0e42282: Link UP Dec 13 02:40:38.875169 systemd-networkd[1362]: cali165c0e42282: Gained carrier Dec 13 02:40:38.887211 containerd[1450]: time="2024-12-13T02:40:38.876760306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-5thcx,Uid:0fc46b44-6bcb-489b-aece-768f5c9d6bf3,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:40:38.878661 systemd[1]: run-netns-cni\x2d1ac8c4e9\x2d6576\x2d1651\x2d0bc5\x2d22acc28bea92.mount: Deactivated successfully. Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.527 [INFO][3798] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.563 [INFO][3798] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0 csi-node-driver- calico-system 8e72692f-d22b-4813-bb35-ab03aefb087b 811 0 2024-12-13 02:40:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-b-31d3d6554f.novalocal csi-node-driver-9qnbf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali165c0e42282 [] []}} ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.564 [INFO][3798] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.700 [INFO][3888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" HandleID="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.733 [INFO][3888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" HandleID="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00042b290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-b-31d3d6554f.novalocal", "pod":"csi-node-driver-9qnbf", "timestamp":"2024-12-13 02:40:38.700339702 +0000 UTC"}, Hostname:"ci-4081-2-1-b-31d3d6554f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.733 [INFO][3888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.733 [INFO][3888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.733 [INFO][3888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-31d3d6554f.novalocal' Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.740 [INFO][3888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.766 [INFO][3888] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.789 [INFO][3888] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.796 [INFO][3888] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.803 [INFO][3888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.803 [INFO][3888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.812 [INFO][3888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.823 [INFO][3888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.834 [INFO][3888] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.65/26] block=192.168.24.64/26 handle="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.834 [INFO][3888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.65/26] handle="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.834 [INFO][3888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
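
The allocation above is easier to follow with the block arithmetic spelled out: the host holds an affinity for 192.168.24.64/26, a block of 2^(32-26) = 64 addresses starting at .64, and the first workload address handed out is 192.168.24.65 (the calico-apiserver pod below receives .66). A standard-library illustration of that arithmetic; the allocation order itself is taken from the log, not computed here:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affinity block claimed by this host in the IPAM lines above.
	block := netip.MustParsePrefix("192.168.24.64/26")
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64

	// .64 is the block's own first address; workloads start at .65 and
	// are published to pods as /32s, matching the IPNetworks fields below.
	addr := block.Addr().Next()
	for i := 1; i <= 2; i++ {
		fmt.Printf("workload %d: %s/32\n", i, addr)
		addr = addr.Next()
	}
}
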
Dec 13 02:40:38.919294 containerd[1450]: 2024-12-13 02:40:38.834 [INFO][3888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.65/26] IPv6=[] ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" HandleID="k8s-pod-network.92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.922455 containerd[1450]: 2024-12-13 02:40:38.843 [INFO][3798] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e72692f-d22b-4813-bb35-ab03aefb087b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"", Pod:"csi-node-driver-9qnbf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali165c0e42282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:38.922455 containerd[1450]: 2024-12-13 02:40:38.844 [INFO][3798] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.65/32] ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.922455 containerd[1450]: 2024-12-13 02:40:38.845 [INFO][3798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali165c0e42282 ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.922455 containerd[1450]: 2024-12-13 02:40:38.874 [INFO][3798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.922455 containerd[1450]: 2024-12-13 02:40:38.883 [INFO][3798] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e72692f-d22b-4813-bb35-ab03aefb087b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e", Pod:"csi-node-driver-9qnbf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali165c0e42282", MAC:"56:2c:7f:93:cb:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:38.922455 containerd[1450]: 2024-12-13 02:40:38.907 [INFO][3798] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e" Namespace="calico-system" Pod="csi-node-driver-9qnbf" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.768 [INFO][3879] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.770 [INFO][3879] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" iface="eth0" netns="/var/run/netns/cni-6c7f143b-ad4b-297d-ab70-840365d31c2e" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.771 [INFO][3879] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" iface="eth0" netns="/var/run/netns/cni-6c7f143b-ad4b-297d-ab70-840365d31c2e" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.771 [INFO][3879] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" iface="eth0" netns="/var/run/netns/cni-6c7f143b-ad4b-297d-ab70-840365d31c2e" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.771 [INFO][3879] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.771 [INFO][3879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.922 [INFO][3916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.922 [INFO][3916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.922 [INFO][3916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.960 [WARNING][3916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.966 [INFO][3916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.978 [INFO][3916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:40:38.999537 containerd[1450]: 2024-12-13 02:40:38.996 [INFO][3879] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:40:39.011984 containerd[1450]: time="2024-12-13T02:40:39.000156914Z" level=info msg="TearDown network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\" successfully" Dec 13 02:40:39.011984 containerd[1450]: time="2024-12-13T02:40:39.000199463Z" level=info msg="StopPodSandbox for \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\" returns successfully" Dec 13 02:40:39.011984 containerd[1450]: time="2024-12-13T02:40:39.001163754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qnq6v,Uid:b6fec068-1607-4b7c-a071-cd5974d02433,Namespace:kube-system,Attempt:1,}" Dec 13 02:40:39.197175 containerd[1450]: time="2024-12-13T02:40:39.185449737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:39.197175 containerd[1450]: time="2024-12-13T02:40:39.185529458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:39.197175 containerd[1450]: time="2024-12-13T02:40:39.185559845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:39.197175 containerd[1450]: time="2024-12-13T02:40:39.185656939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:39.305749 systemd[1]: Started cri-containerd-92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e.scope - libcontainer container 92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e. Dec 13 02:40:39.403944 systemd[1]: run-netns-cni\x2d6c7f143b\x2dad4b\x2d297d\x2dab70\x2d840365d31c2e.mount: Deactivated successfully. Dec 13 02:40:39.440401 containerd[1450]: time="2024-12-13T02:40:39.440201396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9qnbf,Uid:8e72692f-d22b-4813-bb35-ab03aefb087b,Namespace:calico-system,Attempt:1,} returns sandbox id \"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e\"" Dec 13 02:40:39.534559 systemd-networkd[1362]: cali723d674629a: Link UP Dec 13 02:40:39.534836 systemd-networkd[1362]: cali723d674629a: Gained carrier Dec 13 02:40:39.544464 containerd[1450]: time="2024-12-13T02:40:39.544027055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 02:40:39.563625 containerd[1450]: time="2024-12-13T02:40:39.563579884Z" level=info msg="StopPodSandbox for \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\"" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.026 [INFO][3948] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.052 [INFO][3948] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0 calico-apiserver-d894d9fbd- calico-apiserver 0fc46b44-6bcb-489b-aece-768f5c9d6bf3 818 0 2024-12-13 02:40:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d894d9fbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-b-31d3d6554f.novalocal calico-apiserver-d894d9fbd-5thcx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali723d674629a [] []}} ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.052 [INFO][3948] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.154 [INFO][3963] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" 
HandleID="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.276 [INFO][3963] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" HandleID="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-b-31d3d6554f.novalocal", "pod":"calico-apiserver-d894d9fbd-5thcx", "timestamp":"2024-12-13 02:40:39.154898225 +0000 UTC"}, Hostname:"ci-4081-2-1-b-31d3d6554f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.276 [INFO][3963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.276 [INFO][3963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.276 [INFO][3963] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-31d3d6554f.novalocal' Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.286 [INFO][3963] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.295 [INFO][3963] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.321 [INFO][3963] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.395 [INFO][3963] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.442 [INFO][3963] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.443 [INFO][3963] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.450 [INFO][3963] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155 Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.474 [INFO][3963] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.502 [INFO][3963] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.66/26] block=192.168.24.64/26 
handle="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.502 [INFO][3963] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.66/26] handle="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.502 [INFO][3963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:40:39.607007 containerd[1450]: 2024-12-13 02:40:39.502 [INFO][3963] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.66/26] IPv6=[] ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" HandleID="k8s-pod-network.c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:39.611717 containerd[1450]: 2024-12-13 02:40:39.509 [INFO][3948] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fc46b44-6bcb-489b-aece-768f5c9d6bf3", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"", Pod:"calico-apiserver-d894d9fbd-5thcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali723d674629a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:39.611717 containerd[1450]: 2024-12-13 02:40:39.510 [INFO][3948] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.66/32] ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:39.611717 containerd[1450]: 2024-12-13 02:40:39.510 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali723d674629a ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" 
Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:39.611717 containerd[1450]: 2024-12-13 02:40:39.533 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:39.611717 containerd[1450]: 2024-12-13 02:40:39.540 [INFO][3948] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fc46b44-6bcb-489b-aece-768f5c9d6bf3", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155", Pod:"calico-apiserver-d894d9fbd-5thcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali723d674629a", MAC:"02:79:b5:8b:82:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:39.611717 containerd[1450]: 2024-12-13 02:40:39.589 [INFO][3948] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-5thcx" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:40:39.784401 containerd[1450]: time="2024-12-13T02:40:39.783049623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:39.784401 containerd[1450]: time="2024-12-13T02:40:39.783107162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:39.784401 containerd[1450]: time="2024-12-13T02:40:39.783121098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:39.784401 containerd[1450]: time="2024-12-13T02:40:39.783208222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.741 [INFO][4048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.741 [INFO][4048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" iface="eth0" netns="/var/run/netns/cni-c79582b9-adf5-8ef0-1505-931c34736b60" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.743 [INFO][4048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" iface="eth0" netns="/var/run/netns/cni-c79582b9-adf5-8ef0-1505-931c34736b60" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.745 [INFO][4048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" iface="eth0" netns="/var/run/netns/cni-c79582b9-adf5-8ef0-1505-931c34736b60" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.745 [INFO][4048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.745 [INFO][4048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.825 [INFO][4093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.825 [INFO][4093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.826 [INFO][4093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.837 [WARNING][4093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.837 [INFO][4093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.846 [INFO][4093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
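The [INFO][3963] entries above trace Calico's block-affinity assignment: the host's affine block 192.168.24.64/26 is confirmed and loaded, then the next free ordinal is claimed (192.168.24.66 here). Below is a minimal, self-contained Go sketch of just that selection step; the real ipam.AutoAssign in libcalico-go additionally handles affinity claims, locking, and compare-and-swap writes to the datastore, and the pre-used .64/.65 are an assumption for illustration only.

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeAddr walks the block in ordinal order and returns the first
// address not yet marked used, mirroring ipam.go's "Attempting to assign
// 1 addresses from block" step.
func nextFreeAddr(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.24.64/26")
	// Assumption for illustration: ordinals .64 and .65 were claimed
	// earlier in the boot, so this assignment yields .66 as in the log.
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.24.64"): true,
		netip.MustParseAddr("192.168.24.65"): true,
	}
	if ip, ok := nextFreeAddr(block, used); ok {
		fmt.Println("claimed", ip) // claimed 192.168.24.66
	}
}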
Dec 13 02:40:39.863071 containerd[1450]: 2024-12-13 02:40:39.852 [INFO][4048] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:40:39.866581 containerd[1450]: time="2024-12-13T02:40:39.863235288Z" level=info msg="TearDown network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\" successfully" Dec 13 02:40:39.866581 containerd[1450]: time="2024-12-13T02:40:39.863283559Z" level=info msg="StopPodSandbox for \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\" returns successfully" Dec 13 02:40:39.865444 systemd[1]: Started cri-containerd-c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155.scope - libcontainer container c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155. Dec 13 02:40:39.868867 containerd[1450]: time="2024-12-13T02:40:39.868284549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-w6s8z,Uid:d1f56eec-7f0d-4c0d-9522-9259829f7521,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:40:39.885568 kernel: bpftool[4132]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 02:40:40.034310 containerd[1450]: time="2024-12-13T02:40:40.034155641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-5thcx,Uid:0fc46b44-6bcb-489b-aece-768f5c9d6bf3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155\"" Dec 13 02:40:40.035101 systemd-networkd[1362]: calie602d9d24e2: Link UP Dec 13 02:40:40.037322 systemd-networkd[1362]: calie602d9d24e2: Gained carrier Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.732 [INFO][4023] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0 coredns-76f75df574- kube-system b6fec068-1607-4b7c-a071-cd5974d02433 819 0 2024-12-13 02:39:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-b-31d3d6554f.novalocal coredns-76f75df574-qnq6v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie602d9d24e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.733 [INFO][4023] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.892 [INFO][4102] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" HandleID="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.912 [INFO][4102] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" HandleID="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001218c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-b-31d3d6554f.novalocal", "pod":"coredns-76f75df574-qnq6v", "timestamp":"2024-12-13 02:40:39.89275407 +0000 UTC"}, Hostname:"ci-4081-2-1-b-31d3d6554f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.913 [INFO][4102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.913 [INFO][4102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.913 [INFO][4102] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-31d3d6554f.novalocal' Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.916 [INFO][4102] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.929 [INFO][4102] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.943 [INFO][4102] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.952 [INFO][4102] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.960 [INFO][4102] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.960 [INFO][4102] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.969 [INFO][4102] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228 Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:39.985 [INFO][4102] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:40.018 [INFO][4102] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.67/26] block=192.168.24.64/26 handle="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:40.019 [INFO][4102] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.67/26] 
handle="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:40.019 [INFO][4102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:40:40.093841 containerd[1450]: 2024-12-13 02:40:40.019 [INFO][4102] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.67/26] IPv6=[] ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" HandleID="k8s-pod-network.0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:40.098332 containerd[1450]: 2024-12-13 02:40:40.026 [INFO][4023] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6fec068-1607-4b7c-a071-cd5974d02433", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"", Pod:"coredns-76f75df574-qnq6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie602d9d24e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:40.098332 containerd[1450]: 2024-12-13 02:40:40.026 [INFO][4023] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.67/32] ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:40.098332 containerd[1450]: 2024-12-13 02:40:40.027 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie602d9d24e2 ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" 
WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:40.098332 containerd[1450]: 2024-12-13 02:40:40.036 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:40.098332 containerd[1450]: 2024-12-13 02:40:40.040 [INFO][4023] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6fec068-1607-4b7c-a071-cd5974d02433", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228", Pod:"coredns-76f75df574-qnq6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie602d9d24e2", MAC:"9a:6d:6e:72:78:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:40.098332 containerd[1450]: 2024-12-13 02:40:40.086 [INFO][4023] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228" Namespace="kube-system" Pod="coredns-76f75df574-qnq6v" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:40:40.175641 containerd[1450]: time="2024-12-13T02:40:40.174088902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:40.175641 containerd[1450]: time="2024-12-13T02:40:40.174153905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:40.175641 containerd[1450]: time="2024-12-13T02:40:40.174192228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:40.175641 containerd[1450]: time="2024-12-13T02:40:40.174304920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:40.207877 systemd[1]: Started cri-containerd-0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228.scope - libcontainer container 0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228. Dec 13 02:40:40.318840 systemd-networkd[1362]: calid022a4d1220: Link UP Dec 13 02:40:40.320451 systemd-networkd[1362]: calid022a4d1220: Gained carrier Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.079 [INFO][4142] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0 calico-apiserver-d894d9fbd- calico-apiserver d1f56eec-7f0d-4c0d-9522-9259829f7521 829 0 2024-12-13 02:40:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d894d9fbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-b-31d3d6554f.novalocal calico-apiserver-d894d9fbd-w6s8z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid022a4d1220 [] []}} ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.080 [INFO][4142] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.154 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" HandleID="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.219 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" HandleID="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-b-31d3d6554f.novalocal", "pod":"calico-apiserver-d894d9fbd-w6s8z", "timestamp":"2024-12-13 02:40:40.153957286 +0000 UTC"}, Hostname:"ci-4081-2-1-b-31d3d6554f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.219 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.219 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.219 [INFO][4170] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-31d3d6554f.novalocal' Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.224 [INFO][4170] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.246 [INFO][4170] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.271 [INFO][4170] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.275 [INFO][4170] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.282 [INFO][4170] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.282 [INFO][4170] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.285 [INFO][4170] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870 Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.293 [INFO][4170] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.307 [INFO][4170] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.68/26] block=192.168.24.64/26 handle="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.308 [INFO][4170] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.68/26] handle="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.308 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
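Every assignment and release in these entries is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock": the concurrent CNI flows on this node (bracketed IDs 3963, 4093, 4102, 4170) are serialized so two sandboxes cannot claim the same ordinal from the shared /26. A sketch of that serialization follows, with an in-process sync.Mutex standing in for Calico's lock (an assumption — the real lock coordinates separate CNI plugin processes, not goroutines).

package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes assignments behind one host-wide lock, so concurrent
// CNI ADDs always hand out distinct ordinals from the same block.
type hostIPAM struct {
	mu   sync.Mutex
	next int // next free ordinal in 192.168.24.64/26
}

func (h *hostIPAM) autoAssign() int {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ord := h.next
	h.next++
	return ord
}

func main() {
	h := &hostIPAM{next: 66} // the log picks up at 192.168.24.66
	var wg sync.WaitGroup
	// Which pod gets which ordinal is scheduler-dependent; the lock only
	// guarantees uniqueness, as in the logged .66/.67/.68 sequence.
	for _, pod := range []string{"apiserver-d894d9fbd-5thcx", "coredns-76f75df574-qnq6v", "apiserver-d894d9fbd-w6s8z"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Printf("%s -> 192.168.24.%d\n", p, h.autoAssign())
		}(pod)
	}
	wg.Wait()
}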
Dec 13 02:40:40.352369 containerd[1450]: 2024-12-13 02:40:40.308 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.68/26] IPv6=[] ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" HandleID="k8s-pod-network.f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:40.353200 containerd[1450]: 2024-12-13 02:40:40.312 [INFO][4142] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1f56eec-7f0d-4c0d-9522-9259829f7521", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"", Pod:"calico-apiserver-d894d9fbd-w6s8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid022a4d1220", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:40.353200 containerd[1450]: 2024-12-13 02:40:40.312 [INFO][4142] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.68/32] ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:40.353200 containerd[1450]: 2024-12-13 02:40:40.313 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid022a4d1220 ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:40.353200 containerd[1450]: 2024-12-13 02:40:40.320 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:40.353200 containerd[1450]: 
2024-12-13 02:40:40.326 [INFO][4142] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1f56eec-7f0d-4c0d-9522-9259829f7521", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870", Pod:"calico-apiserver-d894d9fbd-w6s8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid022a4d1220", MAC:"92:33:02:c8:51:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:40.353200 containerd[1450]: 2024-12-13 02:40:40.345 [INFO][4142] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870" Namespace="calico-apiserver" Pod="calico-apiserver-d894d9fbd-w6s8z" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:40:40.354714 containerd[1450]: time="2024-12-13T02:40:40.353636421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qnq6v,Uid:b6fec068-1607-4b7c-a071-cd5974d02433,Namespace:kube-system,Attempt:1,} returns sandbox id \"0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228\"" Dec 13 02:40:40.362381 containerd[1450]: time="2024-12-13T02:40:40.362249488Z" level=info msg="CreateContainer within sandbox \"0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:40:40.400812 systemd[1]: run-netns-cni\x2dc79582b9\x2dadf5\x2d8ef0\x2d1505\x2d931c34736b60.mount: Deactivated successfully. Dec 13 02:40:40.410191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325492548.mount: Deactivated successfully. Dec 13 02:40:40.417153 containerd[1450]: time="2024-12-13T02:40:40.417055700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:40.417334 containerd[1450]: time="2024-12-13T02:40:40.417309539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:40.417430 containerd[1450]: time="2024-12-13T02:40:40.417406952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:40.417771 containerd[1450]: time="2024-12-13T02:40:40.417642948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:40.418613 containerd[1450]: time="2024-12-13T02:40:40.418547536Z" level=info msg="CreateContainer within sandbox \"0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04fdfb35ce9dc905b6c6fe5d56a5ccabdb07c753c63845f7b406fab6626d0b90\"" Dec 13 02:40:40.420348 containerd[1450]: time="2024-12-13T02:40:40.419718686Z" level=info msg="StartContainer for \"04fdfb35ce9dc905b6c6fe5d56a5ccabdb07c753c63845f7b406fab6626d0b90\"" Dec 13 02:40:40.482662 systemd[1]: Started cri-containerd-f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870.scope - libcontainer container f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870. Dec 13 02:40:40.513697 systemd[1]: Started cri-containerd-04fdfb35ce9dc905b6c6fe5d56a5ccabdb07c753c63845f7b406fab6626d0b90.scope - libcontainer container 04fdfb35ce9dc905b6c6fe5d56a5ccabdb07c753c63845f7b406fab6626d0b90. Dec 13 02:40:40.564619 containerd[1450]: time="2024-12-13T02:40:40.561741410Z" level=info msg="StopPodSandbox for \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\"" Dec 13 02:40:40.582521 containerd[1450]: time="2024-12-13T02:40:40.582449335Z" level=info msg="StartContainer for \"04fdfb35ce9dc905b6c6fe5d56a5ccabdb07c753c63845f7b406fab6626d0b90\" returns successfully" Dec 13 02:40:40.614077 systemd-networkd[1362]: cali165c0e42282: Gained IPv6LL Dec 13 02:40:40.617555 systemd-networkd[1362]: vxlan.calico: Link UP Dec 13 02:40:40.617559 systemd-networkd[1362]: vxlan.calico: Gained carrier Dec 13 02:40:40.675913 containerd[1450]: time="2024-12-13T02:40:40.673671328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d894d9fbd-w6s8z,Uid:d1f56eec-7f0d-4c0d-9522-9259829f7521,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870\"" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.751 [INFO][4329] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.751 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" iface="eth0" netns="/var/run/netns/cni-eb07a943-1129-fdb2-bda3-80ca5db25f70" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.752 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" iface="eth0" netns="/var/run/netns/cni-eb07a943-1129-fdb2-bda3-80ca5db25f70" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.752 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" iface="eth0" netns="/var/run/netns/cni-eb07a943-1129-fdb2-bda3-80ca5db25f70" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.752 [INFO][4329] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.752 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.787 [INFO][4357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.787 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.787 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.796 [WARNING][4357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.796 [INFO][4357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.798 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:40:40.803289 containerd[1450]: 2024-12-13 02:40:40.800 [INFO][4329] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:40:40.804955 containerd[1450]: time="2024-12-13T02:40:40.803464179Z" level=info msg="TearDown network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\" successfully" Dec 13 02:40:40.804955 containerd[1450]: time="2024-12-13T02:40:40.803600827Z" level=info msg="StopPodSandbox for \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\" returns successfully" Dec 13 02:40:40.805074 containerd[1450]: time="2024-12-13T02:40:40.804994739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-585df87b9-jhcxr,Uid:7b74c729-793f-4b9e-8c1e-327ee29af018,Namespace:calico-system,Attempt:1,}" Dec 13 02:40:41.020876 systemd-networkd[1362]: calid67b5941966: Link UP Dec 13 02:40:41.021803 systemd-networkd[1362]: calid67b5941966: Gained carrier Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.901 [INFO][4364] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0 calico-kube-controllers-585df87b9- calico-system 7b74c729-793f-4b9e-8c1e-327ee29af018 845 0 2024-12-13 02:40:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:585df87b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-b-31d3d6554f.novalocal calico-kube-controllers-585df87b9-jhcxr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid67b5941966 [] []}} ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.902 [INFO][4364] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.948 [INFO][4377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" HandleID="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.962 [INFO][4377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" HandleID="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-b-31d3d6554f.novalocal", "pod":"calico-kube-controllers-585df87b9-jhcxr", "timestamp":"2024-12-13 02:40:40.948201969 +0000 UTC"}, 
Hostname:"ci-4081-2-1-b-31d3d6554f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.962 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.962 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.962 [INFO][4377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-31d3d6554f.novalocal' Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.966 [INFO][4377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.972 [INFO][4377] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.984 [INFO][4377] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.986 [INFO][4377] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.990 [INFO][4377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.990 [INFO][4377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:40.992 [INFO][4377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676 Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:41.001 [INFO][4377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:41.011 [INFO][4377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.69/26] block=192.168.24.64/26 handle="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:41.011 [INFO][4377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.69/26] handle="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:41.011 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:40:41.058667 containerd[1450]: 2024-12-13 02:40:41.011 [INFO][4377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.69/26] IPv6=[] ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" HandleID="k8s-pod-network.7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:41.075840 containerd[1450]: 2024-12-13 02:40:41.015 [INFO][4364] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0", GenerateName:"calico-kube-controllers-585df87b9-", Namespace:"calico-system", SelfLink:"", UID:"7b74c729-793f-4b9e-8c1e-327ee29af018", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"585df87b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"", Pod:"calico-kube-controllers-585df87b9-jhcxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid67b5941966", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:41.075840 containerd[1450]: 2024-12-13 02:40:41.016 [INFO][4364] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.69/32] ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:41.075840 containerd[1450]: 2024-12-13 02:40:41.016 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid67b5941966 ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:41.075840 containerd[1450]: 2024-12-13 02:40:41.021 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" 
WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:41.075840 containerd[1450]: 2024-12-13 02:40:41.023 [INFO][4364] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0", GenerateName:"calico-kube-controllers-585df87b9-", Namespace:"calico-system", SelfLink:"", UID:"7b74c729-793f-4b9e-8c1e-327ee29af018", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"585df87b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676", Pod:"calico-kube-controllers-585df87b9-jhcxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid67b5941966", MAC:"1a:f9:b6:52:35:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:41.075840 containerd[1450]: 2024-12-13 02:40:41.053 [INFO][4364] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676" Namespace="calico-system" Pod="calico-kube-controllers-585df87b9-jhcxr" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:40:41.129266 containerd[1450]: time="2024-12-13T02:40:41.124471438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:41.129266 containerd[1450]: time="2024-12-13T02:40:41.128997980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:41.129266 containerd[1450]: time="2024-12-13T02:40:41.129026024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:41.131064 containerd[1450]: time="2024-12-13T02:40:41.130260854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:41.182957 systemd[1]: Started cri-containerd-7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676.scope - libcontainer container 7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676. Dec 13 02:40:41.330033 containerd[1450]: time="2024-12-13T02:40:41.328267581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-585df87b9-jhcxr,Uid:7b74c729-793f-4b9e-8c1e-327ee29af018,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676\"" Dec 13 02:40:41.403559 systemd[1]: run-netns-cni\x2deb07a943\x2d1129\x2dfdb2\x2dbda3\x2d80ca5db25f70.mount: Deactivated successfully. Dec 13 02:40:41.501905 systemd-networkd[1362]: cali723d674629a: Gained IPv6LL Dec 13 02:40:41.502810 systemd-networkd[1362]: calie602d9d24e2: Gained IPv6LL Dec 13 02:40:41.503024 systemd-networkd[1362]: calid022a4d1220: Gained IPv6LL Dec 13 02:40:41.559874 containerd[1450]: time="2024-12-13T02:40:41.559832888Z" level=info msg="StopPodSandbox for \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\"" Dec 13 02:40:41.589678 kubelet[2672]: I1213 02:40:41.589187 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qnq6v" podStartSLOduration=52.589123934 podStartE2EDuration="52.589123934s" podCreationTimestamp="2024-12-13 02:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:40:41.588598994 +0000 UTC m=+64.191979024" watchObservedRunningTime="2024-12-13 02:40:41.589123934 +0000 UTC m=+64.192503955" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.731 [INFO][4493] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.731 [INFO][4493] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" iface="eth0" netns="/var/run/netns/cni-8af7a153-4dd7-2f23-b0ca-e4624a2e96ba" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.732 [INFO][4493] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" iface="eth0" netns="/var/run/netns/cni-8af7a153-4dd7-2f23-b0ca-e4624a2e96ba" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.733 [INFO][4493] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" iface="eth0" netns="/var/run/netns/cni-8af7a153-4dd7-2f23-b0ca-e4624a2e96ba" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.733 [INFO][4493] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.734 [INFO][4493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.784 [INFO][4500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.784 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.784 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.794 [WARNING][4500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.794 [INFO][4500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.797 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:40:41.807144 containerd[1450]: 2024-12-13 02:40:41.802 [INFO][4493] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:40:41.809931 containerd[1450]: time="2024-12-13T02:40:41.807867084Z" level=info msg="TearDown network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\" successfully" Dec 13 02:40:41.809931 containerd[1450]: time="2024-12-13T02:40:41.807904956Z" level=info msg="StopPodSandbox for \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\" returns successfully" Dec 13 02:40:41.810154 containerd[1450]: time="2024-12-13T02:40:41.810116439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sfprt,Uid:33c3e9fb-68ef-4580-9c08-9e7c76469b7a,Namespace:kube-system,Attempt:1,}" Dec 13 02:40:41.816671 systemd[1]: run-netns-cni\x2d8af7a153\x2d4dd7\x2d2f23\x2db0ca\x2de4624a2e96ba.mount: Deactivated successfully. 
Dec 13 02:40:41.931750 containerd[1450]: time="2024-12-13T02:40:41.931361096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:41.933275 containerd[1450]: time="2024-12-13T02:40:41.933221357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 02:40:41.936943 containerd[1450]: time="2024-12-13T02:40:41.936896020Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:41.943421 containerd[1450]: time="2024-12-13T02:40:41.943246665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:41.944029 containerd[1450]: time="2024-12-13T02:40:41.943981061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.399870357s" Dec 13 02:40:41.944088 containerd[1450]: time="2024-12-13T02:40:41.944027770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 02:40:41.946533 containerd[1450]: time="2024-12-13T02:40:41.946054374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:40:41.949088 containerd[1450]: time="2024-12-13T02:40:41.948813150Z" level=info msg="CreateContainer within sandbox \"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 02:40:41.987509 containerd[1450]: time="2024-12-13T02:40:41.987445825Z" level=info msg="CreateContainer within sandbox \"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0b6b6f5641ceaf8502dab1aa4e5c95be57e493ddfa920a21d7fe1975a00a1cdc\"" Dec 13 02:40:41.990215 containerd[1450]: time="2024-12-13T02:40:41.990177781Z" level=info msg="StartContainer for \"0b6b6f5641ceaf8502dab1aa4e5c95be57e493ddfa920a21d7fe1975a00a1cdc\"" Dec 13 02:40:42.082924 systemd[1]: Started cri-containerd-0b6b6f5641ceaf8502dab1aa4e5c95be57e493ddfa920a21d7fe1975a00a1cdc.scope - libcontainer container 0b6b6f5641ceaf8502dab1aa4e5c95be57e493ddfa920a21d7fe1975a00a1cdc. 
Dec 13 02:40:42.110890 systemd-networkd[1362]: calif65d285f62a: Link UP Dec 13 02:40:42.113766 systemd-networkd[1362]: calif65d285f62a: Gained carrier Dec 13 02:40:42.142014 systemd-networkd[1362]: calid67b5941966: Gained IPv6LL Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:41.944 [INFO][4512] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0 coredns-76f75df574- kube-system 33c3e9fb-68ef-4580-9c08-9e7c76469b7a 859 0 2024-12-13 02:39:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-b-31d3d6554f.novalocal coredns-76f75df574-sfprt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif65d285f62a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:41.944 [INFO][4512] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.016 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" HandleID="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.029 [INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" HandleID="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043d860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-b-31d3d6554f.novalocal", "pod":"coredns-76f75df574-sfprt", "timestamp":"2024-12-13 02:40:42.016025275 +0000 UTC"}, Hostname:"ci-4081-2-1-b-31d3d6554f.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.029 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.029 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.029 [INFO][4525] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-31d3d6554f.novalocal' Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.033 [INFO][4525] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.045 [INFO][4525] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.057 [INFO][4525] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.060 [INFO][4525] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.064 [INFO][4525] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.064 [INFO][4525] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.067 [INFO][4525] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.078 [INFO][4525] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.094 [INFO][4525] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.70/26] block=192.168.24.64/26 handle="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.094 [INFO][4525] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.70/26] handle="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" host="ci-4081-2-1-b-31d3d6554f.novalocal" Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.094 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:40:42.145323 containerd[1450]: 2024-12-13 02:40:42.094 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.70/26] IPv6=[] ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" HandleID="k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:42.149752 containerd[1450]: 2024-12-13 02:40:42.103 [INFO][4512] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33c3e9fb-68ef-4580-9c08-9e7c76469b7a", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"", Pod:"coredns-76f75df574-sfprt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65d285f62a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:42.149752 containerd[1450]: 2024-12-13 02:40:42.103 [INFO][4512] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.70/32] ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:42.149752 containerd[1450]: 2024-12-13 02:40:42.103 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif65d285f62a ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:42.149752 containerd[1450]: 2024-12-13 02:40:42.113 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" 
Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:42.149752 containerd[1450]: 2024-12-13 02:40:42.114 [INFO][4512] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33c3e9fb-68ef-4580-9c08-9e7c76469b7a", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e", Pod:"coredns-76f75df574-sfprt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65d285f62a", MAC:"7a:b8:89:22:f2:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:40:42.149752 containerd[1450]: 2024-12-13 02:40:42.130 [INFO][4512] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e" Namespace="kube-system" Pod="coredns-76f75df574-sfprt" WorkloadEndpoint="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:40:42.208384 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Dec 13 02:40:42.219418 containerd[1450]: time="2024-12-13T02:40:42.218560707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:42.219418 containerd[1450]: time="2024-12-13T02:40:42.218630289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:42.219418 containerd[1450]: time="2024-12-13T02:40:42.218651639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:42.219418 containerd[1450]: time="2024-12-13T02:40:42.218748261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:42.219817 containerd[1450]: time="2024-12-13T02:40:42.219774518Z" level=info msg="StartContainer for \"0b6b6f5641ceaf8502dab1aa4e5c95be57e493ddfa920a21d7fe1975a00a1cdc\" returns successfully" Dec 13 02:40:42.246430 systemd[1]: Started cri-containerd-03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e.scope - libcontainer container 03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e. Dec 13 02:40:42.299421 containerd[1450]: time="2024-12-13T02:40:42.299366098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sfprt,Uid:33c3e9fb-68ef-4580-9c08-9e7c76469b7a,Namespace:kube-system,Attempt:1,} returns sandbox id \"03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e\"" Dec 13 02:40:42.305092 containerd[1450]: time="2024-12-13T02:40:42.305051265Z" level=info msg="CreateContainer within sandbox \"03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:40:42.325862 containerd[1450]: time="2024-12-13T02:40:42.325763176Z" level=info msg="CreateContainer within sandbox \"03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1d9938d3a672b129033967cc4b6722d845a04e3a9c82299800db43ff6d19e4e\"" Dec 13 02:40:42.326997 containerd[1450]: time="2024-12-13T02:40:42.326778362Z" level=info msg="StartContainer for \"b1d9938d3a672b129033967cc4b6722d845a04e3a9c82299800db43ff6d19e4e\"" Dec 13 02:40:42.362702 systemd[1]: Started cri-containerd-b1d9938d3a672b129033967cc4b6722d845a04e3a9c82299800db43ff6d19e4e.scope - libcontainer container b1d9938d3a672b129033967cc4b6722d845a04e3a9c82299800db43ff6d19e4e. Dec 13 02:40:42.396895 containerd[1450]: time="2024-12-13T02:40:42.396753929Z" level=info msg="StartContainer for \"b1d9938d3a672b129033967cc4b6722d845a04e3a9c82299800db43ff6d19e4e\" returns successfully" Dec 13 02:40:42.404754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486023885.mount: Deactivated successfully. 
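The address assignment traced above follows Calico's block-affinity IPAM: under a host-wide lock, the plugin looks up the blocks affine to this node, loads 192.168.24.64/26, takes a free address from it, and claims the address by writing the handle into the block before releasing the lock. A toy sketch of that allocation order (an in-memory stand-in for the datastore; the BlockStore class and its layout are invented for illustration only):

    import ipaddress

    # Toy, in-memory stand-in for Calico-style block-affinity IPAM; the
    # BlockStore class and its layout are illustrative, not Calico's code.
    class BlockStore:
        def __init__(self):
            self.affinities = {}  # host -> list of affine block CIDRs
            self.blocks = {}      # block CIDR -> {address: handle}

        def auto_assign(self, host, handle):
            # 1. Look up the blocks affine to this host
            #    (cf. "Trying affinity for 192.168.24.64/26").
            for cidr in self.affinities.get(host, []):
                allocs = self.blocks.setdefault(cidr, {})
                # 2. Take the first free address in the affine block.
                for addr in ipaddress.ip_network(cidr).hosts():
                    if addr not in allocs:
                        # 3. Claim it by writing the handle into the block
                        #    (cf. "Writing block in order to claim IPs").
                        allocs[addr] = handle
                        return addr
            raise RuntimeError("no free address in affine blocks; real IPAM would claim a new block")

    store = BlockStore()
    store.affinities["ci-4081-2-1-b-31d3d6554f.novalocal"] = ["192.168.24.64/26"]
    print(store.auto_assign("ci-4081-2-1-b-31d3d6554f.novalocal",
                            "k8s-pod-network.03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e"))
    # -> 192.168.24.65 here; in the log, earlier assignments had already consumed
    #    addresses up to .69, so this pod landed on 192.168.24.70.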
Dec 13 02:40:42.669282 kubelet[2672]: I1213 02:40:42.669200 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-sfprt" podStartSLOduration=53.669143578 podStartE2EDuration="53.669143578s" podCreationTimestamp="2024-12-13 02:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:40:42.629898166 +0000 UTC m=+65.233278236" watchObservedRunningTime="2024-12-13 02:40:42.669143578 +0000 UTC m=+65.272523588" Dec 13 02:40:43.997843 systemd-networkd[1362]: calif65d285f62a: Gained IPv6LL Dec 13 02:40:46.693797 containerd[1450]: time="2024-12-13T02:40:46.692715918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:46.697169 containerd[1450]: time="2024-12-13T02:40:46.696300169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 02:40:46.699439 containerd[1450]: time="2024-12-13T02:40:46.699285038Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:46.711692 containerd[1450]: time="2024-12-13T02:40:46.711005152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:46.713270 containerd[1450]: time="2024-12-13T02:40:46.712919472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.766802719s" Dec 13 02:40:46.713270 containerd[1450]: time="2024-12-13T02:40:46.713011966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:40:46.716188 containerd[1450]: time="2024-12-13T02:40:46.716093187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:40:46.723004 containerd[1450]: time="2024-12-13T02:40:46.722897409Z" level=info msg="CreateContainer within sandbox \"c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:40:46.755606 containerd[1450]: time="2024-12-13T02:40:46.755474829Z" level=info msg="CreateContainer within sandbox \"c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"60604755f74c026f2b639580a61a93efbc170f42395827ff805f14f89695b402\"" Dec 13 02:40:46.763522 containerd[1450]: time="2024-12-13T02:40:46.759914262Z" level=info msg="StartContainer for \"60604755f74c026f2b639580a61a93efbc170f42395827ff805f14f89695b402\"" Dec 13 02:40:46.774828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868861241.mount: Deactivated successfully. 
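The kubelet pod-startup records in this log are internally consistent: podStartSLOduration is exactly watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are the zero value here because these CoreDNS pods never triggered an image pull). Checking the coredns-76f75df574-sfprt record above, with nanoseconds truncated to microseconds:

    from datetime import datetime, timezone

    # Values copied from the kubelet record above.
    created = datetime(2024, 12, 13, 2, 39, 49, 0, tzinfo=timezone.utc)
    observed = datetime(2024, 12, 13, 2, 40, 42, 669143, tzinfo=timezone.utc)

    print((observed - created).total_seconds())  # 53.669143, matching podStartSLOduration=53.669143578

The earlier coredns-76f75df574-qnq6v record checks out the same way (02:40:41.589123934 minus 02:39:49 gives its reported 52.589123934s).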
Dec 13 02:40:46.819680 systemd[1]: Started cri-containerd-60604755f74c026f2b639580a61a93efbc170f42395827ff805f14f89695b402.scope - libcontainer container 60604755f74c026f2b639580a61a93efbc170f42395827ff805f14f89695b402. Dec 13 02:40:47.514608 containerd[1450]: time="2024-12-13T02:40:47.514456859Z" level=info msg="StartContainer for \"60604755f74c026f2b639580a61a93efbc170f42395827ff805f14f89695b402\" returns successfully" Dec 13 02:40:47.583998 containerd[1450]: time="2024-12-13T02:40:47.583857391Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:47.586356 containerd[1450]: time="2024-12-13T02:40:47.586280360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 02:40:47.589712 containerd[1450]: time="2024-12-13T02:40:47.589610750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 873.456337ms" Dec 13 02:40:47.590051 containerd[1450]: time="2024-12-13T02:40:47.589717171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 02:40:47.592687 containerd[1450]: time="2024-12-13T02:40:47.590932763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 02:40:47.592687 containerd[1450]: time="2024-12-13T02:40:47.592100225Z" level=info msg="CreateContainer within sandbox \"f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:40:47.623113 containerd[1450]: time="2024-12-13T02:40:47.623061690Z" level=info msg="CreateContainer within sandbox \"f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af919de9d0e98914adda0025449feac81b1f1db93ee076c18bf8418edf01a256\"" Dec 13 02:40:47.628069 containerd[1450]: time="2024-12-13T02:40:47.626571979Z" level=info msg="StartContainer for \"af919de9d0e98914adda0025449feac81b1f1db93ee076c18bf8418edf01a256\"" Dec 13 02:40:47.684840 kubelet[2672]: I1213 02:40:47.684698 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d894d9fbd-5thcx" podStartSLOduration=40.010821186 podStartE2EDuration="46.684642915s" podCreationTimestamp="2024-12-13 02:40:01 +0000 UTC" firstStartedPulling="2024-12-13 02:40:40.040590467 +0000 UTC m=+62.643970487" lastFinishedPulling="2024-12-13 02:40:46.714412156 +0000 UTC m=+69.317792216" observedRunningTime="2024-12-13 02:40:47.68326143 +0000 UTC m=+70.286641450" watchObservedRunningTime="2024-12-13 02:40:47.684642915 +0000 UTC m=+70.288022955" Dec 13 02:40:47.695131 systemd[1]: Started cri-containerd-af919de9d0e98914adda0025449feac81b1f1db93ee076c18bf8418edf01a256.scope - libcontainer container af919de9d0e98914adda0025449feac81b1f1db93ee076c18bf8418edf01a256. 
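The second calico/apiserver pull above completes in under a second, versus 4.77s for the first, because the image is already in the local content store: the completion is preceded by an ImageUpdate (not ImageCreate) event and only 77 bytes are read while re-resolving the reference. The gap between the request and completion records matches the reported duration to within a fraction of a millisecond (presumably containerd starts its timer just after logging the request):

    from datetime import datetime

    FMT = "%Y-%m-%dT%H:%M:%S.%fZ"
    # Timestamps from the second PullImage request and its completion above,
    # truncated to microseconds.
    start = datetime.strptime("2024-12-13T02:40:46.716093Z", FMT)
    done = datetime.strptime("2024-12-13T02:40:47.589610Z", FMT)

    print((done - start).total_seconds() * 1000)  # ~873.5 ms vs the logged 873.456337ms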
Dec 13 02:40:47.906393 containerd[1450]: time="2024-12-13T02:40:47.905296099Z" level=info msg="StartContainer for \"af919de9d0e98914adda0025449feac81b1f1db93ee076c18bf8418edf01a256\" returns successfully" Dec 13 02:40:48.972663 kubelet[2672]: I1213 02:40:48.972016 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d894d9fbd-w6s8z" podStartSLOduration=41.059790262 podStartE2EDuration="47.9719346s" podCreationTimestamp="2024-12-13 02:40:01 +0000 UTC" firstStartedPulling="2024-12-13 02:40:40.678074559 +0000 UTC m=+63.281454579" lastFinishedPulling="2024-12-13 02:40:47.590218897 +0000 UTC m=+70.193598917" observedRunningTime="2024-12-13 02:40:48.685883824 +0000 UTC m=+71.289263844" watchObservedRunningTime="2024-12-13 02:40:48.9719346 +0000 UTC m=+71.575314700" Dec 13 02:40:52.296529 containerd[1450]: time="2024-12-13T02:40:52.295762397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:52.300920 containerd[1450]: time="2024-12-13T02:40:52.300593974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 02:40:52.304541 containerd[1450]: time="2024-12-13T02:40:52.304067681Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:52.310586 containerd[1450]: time="2024-12-13T02:40:52.309257883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:52.312790 containerd[1450]: time="2024-12-13T02:40:52.312275360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.721263087s" Dec 13 02:40:52.312790 containerd[1450]: time="2024-12-13T02:40:52.312339962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 02:40:52.323629 containerd[1450]: time="2024-12-13T02:40:52.322980497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 02:40:52.391107 containerd[1450]: time="2024-12-13T02:40:52.389323577Z" level=info msg="CreateContainer within sandbox \"7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 02:40:52.565635 containerd[1450]: time="2024-12-13T02:40:52.565359229Z" level=info msg="CreateContainer within sandbox \"7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c87cc41cd85208467a1f9f0e5575076a692b8d0b505e0d959808c34d633a4155\"" Dec 13 02:40:52.569690 containerd[1450]: time="2024-12-13T02:40:52.568960957Z" level=info msg="StartContainer for \"c87cc41cd85208467a1f9f0e5575076a692b8d0b505e0d959808c34d633a4155\"" Dec 13 02:40:52.673083 systemd[1]: Started 
cri-containerd-c87cc41cd85208467a1f9f0e5575076a692b8d0b505e0d959808c34d633a4155.scope - libcontainer container c87cc41cd85208467a1f9f0e5575076a692b8d0b505e0d959808c34d633a4155. Dec 13 02:40:52.819519 containerd[1450]: time="2024-12-13T02:40:52.819186569Z" level=info msg="StartContainer for \"c87cc41cd85208467a1f9f0e5575076a692b8d0b505e0d959808c34d633a4155\" returns successfully" Dec 13 02:40:54.104928 systemd[1]: Started sshd@9-172.24.4.208:22-172.24.4.1:55952.service - OpenSSH per-connection server daemon (172.24.4.1:55952). Dec 13 02:40:54.136001 kubelet[2672]: I1213 02:40:54.135768 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-585df87b9-jhcxr" podStartSLOduration=41.155800551 podStartE2EDuration="52.135711877s" podCreationTimestamp="2024-12-13 02:40:02 +0000 UTC" firstStartedPulling="2024-12-13 02:40:41.333217893 +0000 UTC m=+63.936597903" lastFinishedPulling="2024-12-13 02:40:52.313129209 +0000 UTC m=+74.916509229" observedRunningTime="2024-12-13 02:40:53.731523852 +0000 UTC m=+76.334903912" watchObservedRunningTime="2024-12-13 02:40:54.135711877 +0000 UTC m=+76.739091897" Dec 13 02:40:54.642521 containerd[1450]: time="2024-12-13T02:40:54.642383957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:54.645879 containerd[1450]: time="2024-12-13T02:40:54.645139519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 02:40:54.646702 containerd[1450]: time="2024-12-13T02:40:54.646636049Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:54.651169 containerd[1450]: time="2024-12-13T02:40:54.650020157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:54.651169 containerd[1450]: time="2024-12-13T02:40:54.650990284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.327918696s" Dec 13 02:40:54.651169 containerd[1450]: time="2024-12-13T02:40:54.651028818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 02:40:54.654172 containerd[1450]: time="2024-12-13T02:40:54.654116285Z" level=info msg="CreateContainer within sandbox \"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 02:40:54.696697 containerd[1450]: time="2024-12-13T02:40:54.696510152Z" level=info msg="CreateContainer within sandbox \"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1f2eaa36d74a844211ba26a902ffca56cb09bc35f2210c1b7642fbebe4b035e9\"" Dec 13 
02:40:54.699217 containerd[1450]: time="2024-12-13T02:40:54.699100694Z" level=info msg="StartContainer for \"1f2eaa36d74a844211ba26a902ffca56cb09bc35f2210c1b7642fbebe4b035e9\"" Dec 13 02:40:54.772739 systemd[1]: Started cri-containerd-1f2eaa36d74a844211ba26a902ffca56cb09bc35f2210c1b7642fbebe4b035e9.scope - libcontainer container 1f2eaa36d74a844211ba26a902ffca56cb09bc35f2210c1b7642fbebe4b035e9. Dec 13 02:40:54.821307 containerd[1450]: time="2024-12-13T02:40:54.821138981Z" level=info msg="StartContainer for \"1f2eaa36d74a844211ba26a902ffca56cb09bc35f2210c1b7642fbebe4b035e9\" returns successfully" Dec 13 02:40:55.496125 sshd[4844]: Accepted publickey for core from 172.24.4.1 port 55952 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:40:55.506577 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:40:55.523398 systemd-logind[1432]: New session 12 of user core. Dec 13 02:40:55.530749 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 02:40:55.794222 kubelet[2672]: I1213 02:40:55.793324 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9qnbf" podStartSLOduration=38.682728943 podStartE2EDuration="53.793232133s" podCreationTimestamp="2024-12-13 02:40:02 +0000 UTC" firstStartedPulling="2024-12-13 02:40:39.540806917 +0000 UTC m=+62.144186927" lastFinishedPulling="2024-12-13 02:40:54.651310107 +0000 UTC m=+77.254690117" observedRunningTime="2024-12-13 02:40:55.792854832 +0000 UTC m=+78.396234892" watchObservedRunningTime="2024-12-13 02:40:55.793232133 +0000 UTC m=+78.396612193" Dec 13 02:40:56.266679 kubelet[2672]: I1213 02:40:56.266591 2672 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 02:40:56.289574 kubelet[2672]: I1213 02:40:56.289531 2672 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 02:40:58.304855 sshd[4844]: pam_unix(sshd:session): session closed for user core Dec 13 02:40:58.320157 systemd[1]: sshd@9-172.24.4.208:22-172.24.4.1:55952.service: Deactivated successfully. Dec 13 02:40:58.325562 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:40:58.329536 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:40:58.332696 systemd-logind[1432]: Removed session 12. Dec 13 02:41:03.332767 systemd[1]: Started sshd@10-172.24.4.208:22-172.24.4.1:40574.service - OpenSSH per-connection server daemon (172.24.4.1:40574). Dec 13 02:41:05.024849 sshd[4952]: Accepted publickey for core from 172.24.4.1 port 40574 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:05.028004 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:05.041299 systemd-logind[1432]: New session 13 of user core. Dec 13 02:41:05.049196 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 02:41:06.047224 sshd[4952]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:06.051522 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:41:06.051918 systemd[1]: sshd@10-172.24.4.208:22-172.24.4.1:40574.service: Deactivated successfully. Dec 13 02:41:06.054443 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:41:06.056201 systemd-logind[1432]: Removed session 13. 
Dec 13 02:41:11.070077 systemd[1]: Started sshd@11-172.24.4.208:22-172.24.4.1:58526.service - OpenSSH per-connection server daemon (172.24.4.1:58526). Dec 13 02:41:12.250460 sshd[4966]: Accepted publickey for core from 172.24.4.1 port 58526 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:12.252971 sshd[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:12.264100 systemd-logind[1432]: New session 14 of user core. Dec 13 02:41:12.268730 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 02:41:13.049316 sshd[4966]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:13.063932 systemd[1]: sshd@11-172.24.4.208:22-172.24.4.1:58526.service: Deactivated successfully. Dec 13 02:41:13.067423 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:41:13.070226 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:41:13.081146 systemd[1]: Started sshd@12-172.24.4.208:22-172.24.4.1:58536.service - OpenSSH per-connection server daemon (172.24.4.1:58536). Dec 13 02:41:13.084576 systemd-logind[1432]: Removed session 14. Dec 13 02:41:14.314597 sshd[4980]: Accepted publickey for core from 172.24.4.1 port 58536 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:14.317760 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:14.328781 systemd-logind[1432]: New session 15 of user core. Dec 13 02:41:14.336982 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 02:41:15.328321 systemd[1]: Started sshd@13-172.24.4.208:22-172.24.4.1:58258.service - OpenSSH per-connection server daemon (172.24.4.1:58258). Dec 13 02:41:15.329335 sshd[4980]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:15.356296 systemd[1]: sshd@12-172.24.4.208:22-172.24.4.1:58536.service: Deactivated successfully. Dec 13 02:41:15.363055 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:41:15.367161 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:41:15.370261 systemd-logind[1432]: Removed session 15. Dec 13 02:41:16.809965 sshd[4991]: Accepted publickey for core from 172.24.4.1 port 58258 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:16.815067 sshd[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:16.826595 systemd-logind[1432]: New session 16 of user core. Dec 13 02:41:16.835164 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 02:41:18.127689 sshd[4991]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:18.153050 systemd[1]: sshd@13-172.24.4.208:22-172.24.4.1:58258.service: Deactivated successfully. Dec 13 02:41:18.158409 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:41:18.161629 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:41:18.164992 systemd-logind[1432]: Removed session 16. Dec 13 02:41:23.155081 systemd[1]: Started sshd@14-172.24.4.208:22-172.24.4.1:58264.service - OpenSSH per-connection server daemon (172.24.4.1:58264). Dec 13 02:41:24.470385 sshd[5036]: Accepted publickey for core from 172.24.4.1 port 58264 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:24.471951 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:24.478760 systemd-logind[1432]: New session 17 of user core. 
Dec 13 02:41:24.483676 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 02:41:25.124210 sshd[5036]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:25.130696 systemd[1]: sshd@14-172.24.4.208:22-172.24.4.1:58264.service: Deactivated successfully. Dec 13 02:41:25.132963 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:41:25.134441 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:41:25.136016 systemd-logind[1432]: Removed session 17. Dec 13 02:41:30.149023 systemd[1]: Started sshd@15-172.24.4.208:22-172.24.4.1:54482.service - OpenSSH per-connection server daemon (172.24.4.1:54482). Dec 13 02:41:31.380516 sshd[5071]: Accepted publickey for core from 172.24.4.1 port 54482 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:31.382981 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:31.394547 systemd-logind[1432]: New session 18 of user core. Dec 13 02:41:31.404863 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 02:41:32.279915 sshd[5071]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:32.284166 systemd[1]: sshd@15-172.24.4.208:22-172.24.4.1:54482.service: Deactivated successfully. Dec 13 02:41:32.286122 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:41:32.288078 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:41:32.290091 systemd-logind[1432]: Removed session 18. Dec 13 02:41:37.300773 systemd[1]: Started sshd@16-172.24.4.208:22-172.24.4.1:43208.service - OpenSSH per-connection server daemon (172.24.4.1:43208). Dec 13 02:41:37.888619 containerd[1450]: time="2024-12-13T02:41:37.888409717Z" level=info msg="StopPodSandbox for \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\"" Dec 13 02:41:38.526061 sshd[5105]: Accepted publickey for core from 172.24.4.1 port 43208 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:38.549289 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:38.562562 systemd-logind[1432]: New session 19 of user core. Dec 13 02:41:38.568910 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.671 [WARNING][5121] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6fec068-1607-4b7c-a071-cd5974d02433", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228", Pod:"coredns-76f75df574-qnq6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie602d9d24e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.676 [INFO][5121] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.676 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" iface="eth0" netns="" Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.676 [INFO][5121] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.677 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.721 [INFO][5128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.722 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.722 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.730 [WARNING][5128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.730 [INFO][5128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.732 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:38.737083 containerd[1450]: 2024-12-13 02:41:38.734 [INFO][5121] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.740708 containerd[1450]: time="2024-12-13T02:41:38.738222157Z" level=info msg="TearDown network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\" successfully" Dec 13 02:41:38.740708 containerd[1450]: time="2024-12-13T02:41:38.738297920Z" level=info msg="StopPodSandbox for \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\" returns successfully" Dec 13 02:41:38.759050 containerd[1450]: time="2024-12-13T02:41:38.758720982Z" level=info msg="RemovePodSandbox for \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\"" Dec 13 02:41:38.759050 containerd[1450]: time="2024-12-13T02:41:38.758789441Z" level=info msg="Forcibly stopping sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\"" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.822 [WARNING][5146] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b6fec068-1607-4b7c-a071-cd5974d02433", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"0a27194898dd74baba7d504aca8ed7f50de43122e0ad14aa5be9fe1722f0a228", Pod:"coredns-76f75df574-qnq6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie602d9d24e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.822 [INFO][5146] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.822 [INFO][5146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" iface="eth0" netns="" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.822 [INFO][5146] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.822 [INFO][5146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.847 [INFO][5152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.847 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.847 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.855 [WARNING][5152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.855 [INFO][5152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" HandleID="k8s-pod-network.545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--qnq6v-eth0" Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.857 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:38.861147 containerd[1450]: 2024-12-13 02:41:38.858 [INFO][5146] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33" Dec 13 02:41:38.861147 containerd[1450]: time="2024-12-13T02:41:38.861120077Z" level=info msg="TearDown network for sandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\" successfully" Dec 13 02:41:38.886189 containerd[1450]: time="2024-12-13T02:41:38.886065873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:41:38.920700 containerd[1450]: time="2024-12-13T02:41:38.920645225Z" level=info msg="RemovePodSandbox \"545ea108a48aef43421de94533e20fddba5d935cba65950b7682f1bb0e629d33\" returns successfully" Dec 13 02:41:38.932517 containerd[1450]: time="2024-12-13T02:41:38.932456293Z" level=info msg="StopPodSandbox for \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\"" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:38.979 [WARNING][5170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33c3e9fb-68ef-4580-9c08-9e7c76469b7a", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e", Pod:"coredns-76f75df574-sfprt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65d285f62a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:38.979 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:38.979 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" iface="eth0" netns="" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:38.979 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:38.979 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:39.006 [INFO][5176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:39.006 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:39.006 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:39.016 [WARNING][5176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:39.017 [INFO][5176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:39.019 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.029717 containerd[1450]: 2024-12-13 02:41:39.026 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.029717 containerd[1450]: time="2024-12-13T02:41:39.029170378Z" level=info msg="TearDown network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\" successfully" Dec 13 02:41:39.029717 containerd[1450]: time="2024-12-13T02:41:39.029201025Z" level=info msg="StopPodSandbox for \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\" returns successfully" Dec 13 02:41:39.030397 containerd[1450]: time="2024-12-13T02:41:39.029806213Z" level=info msg="RemovePodSandbox for \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\"" Dec 13 02:41:39.030397 containerd[1450]: time="2024-12-13T02:41:39.029941728Z" level=info msg="Forcibly stopping sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\"" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.121 [WARNING][5198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33c3e9fb-68ef-4580-9c08-9e7c76469b7a", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"03186e6ecb7659e1d0396ddd2860430d6d55daf84699ac15c60a5556e2cdbe4e", Pod:"coredns-76f75df574-sfprt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif65d285f62a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.122 [INFO][5198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.122 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" iface="eth0" netns="" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.122 [INFO][5198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.122 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.147 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.147 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.147 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.156 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.158 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" HandleID="k8s-pod-network.397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-coredns--76f75df574--sfprt-eth0" Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.160 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.168126 containerd[1450]: 2024-12-13 02:41:39.165 [INFO][5198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65" Dec 13 02:41:39.168126 containerd[1450]: time="2024-12-13T02:41:39.167252253Z" level=info msg="TearDown network for sandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\" successfully" Dec 13 02:41:39.172532 containerd[1450]: time="2024-12-13T02:41:39.172366850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:41:39.172532 containerd[1450]: time="2024-12-13T02:41:39.172441130Z" level=info msg="RemovePodSandbox \"397b5d5566daa503c40d3625e685d60dba0902373966745528806a19b0a62f65\" returns successfully" Dec 13 02:41:39.173547 containerd[1450]: time="2024-12-13T02:41:39.173226226Z" level=info msg="StopPodSandbox for \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\"" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.242 [WARNING][5227] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0", GenerateName:"calico-kube-controllers-585df87b9-", Namespace:"calico-system", SelfLink:"", UID:"7b74c729-793f-4b9e-8c1e-327ee29af018", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"585df87b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676", Pod:"calico-kube-controllers-585df87b9-jhcxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid67b5941966", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.243 [INFO][5227] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.243 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" iface="eth0" netns="" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.243 [INFO][5227] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.243 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.281 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.282 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.284 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.298 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.298 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.300 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.304990 containerd[1450]: 2024-12-13 02:41:39.302 [INFO][5227] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.306022 containerd[1450]: time="2024-12-13T02:41:39.305302659Z" level=info msg="TearDown network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\" successfully" Dec 13 02:41:39.306022 containerd[1450]: time="2024-12-13T02:41:39.305339438Z" level=info msg="StopPodSandbox for \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\" returns successfully" Dec 13 02:41:39.307121 containerd[1450]: time="2024-12-13T02:41:39.307034485Z" level=info msg="RemovePodSandbox for \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\"" Dec 13 02:41:39.307121 containerd[1450]: time="2024-12-13T02:41:39.307069591Z" level=info msg="Forcibly stopping sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\"" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.353 [WARNING][5252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0", GenerateName:"calico-kube-controllers-585df87b9-", Namespace:"calico-system", SelfLink:"", UID:"7b74c729-793f-4b9e-8c1e-327ee29af018", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"585df87b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"7e7503c23b575bf98e3ee1e404ca1af2f51d68a1d875fce8ff5727b411591676", Pod:"calico-kube-controllers-585df87b9-jhcxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid67b5941966", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.353 [INFO][5252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.353 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" iface="eth0" netns="" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.353 [INFO][5252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.353 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.380 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.380 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.380 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.387 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.387 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" HandleID="k8s-pod-network.94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--kube--controllers--585df87b9--jhcxr-eth0" Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.389 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.393028 containerd[1450]: 2024-12-13 02:41:39.391 [INFO][5252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52" Dec 13 02:41:39.394864 containerd[1450]: time="2024-12-13T02:41:39.393767239Z" level=info msg="TearDown network for sandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\" successfully" Dec 13 02:41:39.398529 containerd[1450]: time="2024-12-13T02:41:39.398496822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:41:39.398605 containerd[1450]: time="2024-12-13T02:41:39.398569418Z" level=info msg="RemovePodSandbox \"94f138cb1525f1d6908b7caba24dad6518badd0df93a8320331c00602c63ba52\" returns successfully" Dec 13 02:41:39.399077 containerd[1450]: time="2024-12-13T02:41:39.399052847Z" level=info msg="StopPodSandbox for \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\"" Dec 13 02:41:39.486276 sshd[5105]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:39.498741 systemd[1]: Started sshd@17-172.24.4.208:22-172.24.4.1:43214.service - OpenSSH per-connection server daemon (172.24.4.1:43214). Dec 13 02:41:39.505083 systemd[1]: sshd@16-172.24.4.208:22-172.24.4.1:43208.service: Deactivated successfully. Dec 13 02:41:39.508213 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:41:39.510957 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:41:39.516103 systemd-logind[1432]: Removed session 19. Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.444 [WARNING][5277] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1f56eec-7f0d-4c0d-9522-9259829f7521", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870", Pod:"calico-apiserver-d894d9fbd-w6s8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid022a4d1220", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.444 [INFO][5277] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.444 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" iface="eth0" netns="" Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.444 [INFO][5277] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.444 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.475 [INFO][5283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.475 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.475 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.509 [WARNING][5283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.509 [INFO][5283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.512 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.525737 containerd[1450]: 2024-12-13 02:41:39.522 [INFO][5277] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.525737 containerd[1450]: time="2024-12-13T02:41:39.525613579Z" level=info msg="TearDown network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\" successfully" Dec 13 02:41:39.525737 containerd[1450]: time="2024-12-13T02:41:39.525638937Z" level=info msg="StopPodSandbox for \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\" returns successfully" Dec 13 02:41:39.527242 containerd[1450]: time="2024-12-13T02:41:39.526411650Z" level=info msg="RemovePodSandbox for \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\"" Dec 13 02:41:39.527242 containerd[1450]: time="2024-12-13T02:41:39.526437569Z" level=info msg="Forcibly stopping sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\"" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.587 [WARNING][5305] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1f56eec-7f0d-4c0d-9522-9259829f7521", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"f9da70b2ac6ad43933a5abf0a63757bb346c5c3808dbe58c4162b89878068870", Pod:"calico-apiserver-d894d9fbd-w6s8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid022a4d1220", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.587 [INFO][5305] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.587 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" iface="eth0" netns="" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.587 [INFO][5305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.587 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.613 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.614 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.614 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.621 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.621 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" HandleID="k8s-pod-network.346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--w6s8z-eth0" Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.623 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.626540 containerd[1450]: 2024-12-13 02:41:39.624 [INFO][5305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52" Dec 13 02:41:39.627170 containerd[1450]: time="2024-12-13T02:41:39.626619129Z" level=info msg="TearDown network for sandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\" successfully" Dec 13 02:41:39.630897 containerd[1450]: time="2024-12-13T02:41:39.630778991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:41:39.630897 containerd[1450]: time="2024-12-13T02:41:39.630881023Z" level=info msg="RemovePodSandbox \"346f752a207b1dacd95d811a0541afb8312074760139a7369e171c82196f3c52\" returns successfully" Dec 13 02:41:39.631777 containerd[1450]: time="2024-12-13T02:41:39.631456906Z" level=info msg="StopPodSandbox for \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\"" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.675 [WARNING][5331] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fc46b44-6bcb-489b-aece-768f5c9d6bf3", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155", Pod:"calico-apiserver-d894d9fbd-5thcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali723d674629a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.675 [INFO][5331] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.675 [INFO][5331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" iface="eth0" netns="" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.675 [INFO][5331] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.675 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.703 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.703 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.703 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.711 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.711 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.713 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.717195 containerd[1450]: 2024-12-13 02:41:39.714 [INFO][5331] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.717195 containerd[1450]: time="2024-12-13T02:41:39.716816077Z" level=info msg="TearDown network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\" successfully" Dec 13 02:41:39.717195 containerd[1450]: time="2024-12-13T02:41:39.716840723Z" level=info msg="StopPodSandbox for \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\" returns successfully" Dec 13 02:41:39.717733 containerd[1450]: time="2024-12-13T02:41:39.717343729Z" level=info msg="RemovePodSandbox for \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\"" Dec 13 02:41:39.717733 containerd[1450]: time="2024-12-13T02:41:39.717387391Z" level=info msg="Forcibly stopping sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\"" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.763 [WARNING][5355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0", GenerateName:"calico-apiserver-d894d9fbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0fc46b44-6bcb-489b-aece-768f5c9d6bf3", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d894d9fbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"c6fd9ae2dff69b94f7651fe32609a64787ee689c91e7f4068d7a2c8f23ac2155", Pod:"calico-apiserver-d894d9fbd-5thcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali723d674629a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.763 [INFO][5355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.763 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" iface="eth0" netns="" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.763 [INFO][5355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.763 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.792 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.792 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.792 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.799 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.799 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" HandleID="k8s-pod-network.9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-calico--apiserver--d894d9fbd--5thcx-eth0" Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.802 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.806641 containerd[1450]: 2024-12-13 02:41:39.804 [INFO][5355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d" Dec 13 02:41:39.806641 containerd[1450]: time="2024-12-13T02:41:39.805954955Z" level=info msg="TearDown network for sandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\" successfully" Dec 13 02:41:39.811124 containerd[1450]: time="2024-12-13T02:41:39.811092414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:41:39.811352 containerd[1450]: time="2024-12-13T02:41:39.811239671Z" level=info msg="RemovePodSandbox \"9e95ab1f67d91335b43bdc101237c01d380e43b057afa10b93f8815725a6635d\" returns successfully" Dec 13 02:41:39.811885 containerd[1450]: time="2024-12-13T02:41:39.811845551Z" level=info msg="StopPodSandbox for \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\"" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.860 [WARNING][5380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e72692f-d22b-4813-bb35-ab03aefb087b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e", Pod:"csi-node-driver-9qnbf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali165c0e42282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.861 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.861 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" iface="eth0" netns="" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.861 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.861 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.884 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.884 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.885 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.892 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.892 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.894 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.898307 containerd[1450]: 2024-12-13 02:41:39.896 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.899023 containerd[1450]: time="2024-12-13T02:41:39.898852389Z" level=info msg="TearDown network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\" successfully" Dec 13 02:41:39.899023 containerd[1450]: time="2024-12-13T02:41:39.898901131Z" level=info msg="StopPodSandbox for \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\" returns successfully" Dec 13 02:41:39.900085 containerd[1450]: time="2024-12-13T02:41:39.899856417Z" level=info msg="RemovePodSandbox for \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\"" Dec 13 02:41:39.900085 containerd[1450]: time="2024-12-13T02:41:39.899884259Z" level=info msg="Forcibly stopping sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\"" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.944 [WARNING][5404] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e72692f-d22b-4813-bb35-ab03aefb087b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-31d3d6554f.novalocal", ContainerID:"92f0a4553201cef4fd5a1369d5d35948dc0fe7ac6b578e7fb11f7b793f36bc6e", Pod:"csi-node-driver-9qnbf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali165c0e42282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.944 [INFO][5404] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.944 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" iface="eth0" netns="" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.944 [INFO][5404] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.944 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.966 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.966 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.966 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.975 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.975 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" HandleID="k8s-pod-network.299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Workload="ci--4081--2--1--b--31d3d6554f.novalocal-k8s-csi--node--driver--9qnbf-eth0" Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.977 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:41:39.980230 containerd[1450]: 2024-12-13 02:41:39.978 [INFO][5404] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b" Dec 13 02:41:39.981882 containerd[1450]: time="2024-12-13T02:41:39.980765911Z" level=info msg="TearDown network for sandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\" successfully" Dec 13 02:41:39.984958 containerd[1450]: time="2024-12-13T02:41:39.984921966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:41:39.985023 containerd[1450]: time="2024-12-13T02:41:39.984999291Z" level=info msg="RemovePodSandbox \"299f4e56be73d6ca2b86b2641695a2326ea23375985d2a3098aacca162c0071b\" returns successfully" Dec 13 02:41:40.847005 sshd[5290]: Accepted publickey for core from 172.24.4.1 port 43214 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:40.860129 sshd[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:40.874070 systemd-logind[1432]: New session 20 of user core. Dec 13 02:41:40.880019 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 02:41:42.268237 sshd[5290]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:42.278059 systemd[1]: sshd@17-172.24.4.208:22-172.24.4.1:43214.service: Deactivated successfully. Dec 13 02:41:42.283396 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:41:42.285063 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:41:42.295091 systemd[1]: Started sshd@18-172.24.4.208:22-172.24.4.1:43228.service - OpenSSH per-connection server daemon (172.24.4.1:43228). Dec 13 02:41:42.297762 systemd-logind[1432]: Removed session 20. Dec 13 02:41:43.530456 sshd[5425]: Accepted publickey for core from 172.24.4.1 port 43228 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:43.534892 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:43.545383 systemd-logind[1432]: New session 21 of user core. Dec 13 02:41:43.554778 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 02:41:47.844276 sshd[5425]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:47.862676 systemd[1]: Started sshd@19-172.24.4.208:22-172.24.4.1:42248.service - OpenSSH per-connection server daemon (172.24.4.1:42248). Dec 13 02:41:47.869281 systemd[1]: sshd@18-172.24.4.208:22-172.24.4.1:43228.service: Deactivated successfully. 
Dec 13 02:41:47.877202 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:41:47.884173 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:41:47.887328 systemd-logind[1432]: Removed session 21. Dec 13 02:41:49.112572 sshd[5442]: Accepted publickey for core from 172.24.4.1 port 42248 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:49.119215 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:49.140387 systemd-logind[1432]: New session 22 of user core. Dec 13 02:41:49.149162 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 02:41:53.075990 sshd[5442]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:53.094065 systemd[1]: sshd@19-172.24.4.208:22-172.24.4.1:42248.service: Deactivated successfully. Dec 13 02:41:53.099740 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:41:53.100178 systemd[1]: session-22.scope: Consumed 1.273s CPU time. Dec 13 02:41:53.104157 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:41:53.112239 systemd[1]: Started sshd@20-172.24.4.208:22-172.24.4.1:42264.service - OpenSSH per-connection server daemon (172.24.4.1:42264). Dec 13 02:41:53.116623 systemd-logind[1432]: Removed session 22. Dec 13 02:41:54.002585 systemd[1]: run-containerd-runc-k8s.io-c87cc41cd85208467a1f9f0e5575076a692b8d0b505e0d959808c34d633a4155-runc.pGbRnx.mount: Deactivated successfully. Dec 13 02:41:54.331035 sshd[5457]: Accepted publickey for core from 172.24.4.1 port 42264 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:41:54.333393 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:41:54.341152 systemd-logind[1432]: New session 23 of user core. Dec 13 02:41:54.345644 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 02:41:55.210073 sshd[5457]: pam_unix(sshd:session): session closed for user core Dec 13 02:41:55.218139 systemd[1]: sshd@20-172.24.4.208:22-172.24.4.1:42264.service: Deactivated successfully. Dec 13 02:41:55.222385 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:41:55.225924 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:41:55.228398 systemd-logind[1432]: Removed session 23. Dec 13 02:42:00.224823 systemd[1]: Started sshd@21-172.24.4.208:22-172.24.4.1:51696.service - OpenSSH per-connection server daemon (172.24.4.1:51696). Dec 13 02:42:01.650561 sshd[5493]: Accepted publickey for core from 172.24.4.1 port 51696 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:42:01.680106 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:42:01.696533 systemd-logind[1432]: New session 24 of user core. Dec 13 02:42:01.702742 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 02:42:03.057828 sshd[5493]: pam_unix(sshd:session): session closed for user core Dec 13 02:42:03.063438 systemd[1]: sshd@21-172.24.4.208:22-172.24.4.1:51696.service: Deactivated successfully. Dec 13 02:42:03.065276 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:42:03.073332 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:42:03.076282 systemd-logind[1432]: Removed session 24. Dec 13 02:42:08.073981 systemd[1]: Started sshd@22-172.24.4.208:22-172.24.4.1:55660.service - OpenSSH per-connection server daemon (172.24.4.1:55660). 
Dec 13 02:42:09.286140 sshd[5537]: Accepted publickey for core from 172.24.4.1 port 55660 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:42:09.288118 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:42:09.300307 systemd-logind[1432]: New session 25 of user core. Dec 13 02:42:09.305726 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 02:42:10.396271 sshd[5537]: pam_unix(sshd:session): session closed for user core Dec 13 02:42:10.404091 systemd[1]: sshd@22-172.24.4.208:22-172.24.4.1:55660.service: Deactivated successfully. Dec 13 02:42:10.410584 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:42:10.412375 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:42:10.414823 systemd-logind[1432]: Removed session 25. Dec 13 02:42:15.424574 systemd[1]: Started sshd@23-172.24.4.208:22-172.24.4.1:45408.service - OpenSSH per-connection server daemon (172.24.4.1:45408). Dec 13 02:42:16.875621 sshd[5550]: Accepted publickey for core from 172.24.4.1 port 45408 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:42:16.879006 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:42:16.891027 systemd-logind[1432]: New session 26 of user core. Dec 13 02:42:16.903900 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 02:42:17.598321 sshd[5550]: pam_unix(sshd:session): session closed for user core Dec 13 02:42:17.605339 systemd[1]: sshd@23-172.24.4.208:22-172.24.4.1:45408.service: Deactivated successfully. Dec 13 02:42:17.610242 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 02:42:17.614451 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit. Dec 13 02:42:17.617736 systemd-logind[1432]: Removed session 26. Dec 13 02:42:22.628979 systemd[1]: Started sshd@24-172.24.4.208:22-172.24.4.1:45422.service - OpenSSH per-connection server daemon (172.24.4.1:45422). Dec 13 02:42:23.506790 systemd[1]: run-containerd-runc-k8s.io-c87cc41cd85208467a1f9f0e5575076a692b8d0b505e0d959808c34d633a4155-runc.Mtxl4I.mount: Deactivated successfully. Dec 13 02:42:23.841682 sshd[5601]: Accepted publickey for core from 172.24.4.1 port 45422 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:42:23.846110 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:42:23.858560 systemd-logind[1432]: New session 27 of user core. Dec 13 02:42:23.865821 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 02:42:24.943084 sshd[5601]: pam_unix(sshd:session): session closed for user core Dec 13 02:42:24.947176 systemd[1]: sshd@24-172.24.4.208:22-172.24.4.1:45422.service: Deactivated successfully. Dec 13 02:42:24.952212 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 02:42:24.954728 systemd-logind[1432]: Session 27 logged out. Waiting for processes to exit. Dec 13 02:42:24.956251 systemd-logind[1432]: Removed session 27. Dec 13 02:42:29.962541 systemd[1]: Started sshd@25-172.24.4.208:22-172.24.4.1:50034.service - OpenSSH per-connection server daemon (172.24.4.1:50034). Dec 13 02:42:31.194027 systemd[1]: run-containerd-runc-k8s.io-e591f0b6f2e725ce8004e0080aeb1cd014f6f41f0b21cb09f7a5e10f4f916f6f-runc.6YbpRa.mount: Deactivated successfully. 
Dec 13 02:42:31.408334 sshd[5633]: Accepted publickey for core from 172.24.4.1 port 50034 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:42:31.411821 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:42:31.420294 systemd-logind[1432]: New session 28 of user core. Dec 13 02:42:31.424839 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 02:42:32.190963 sshd[5633]: pam_unix(sshd:session): session closed for user core Dec 13 02:42:32.201006 systemd[1]: sshd@25-172.24.4.208:22-172.24.4.1:50034.service: Deactivated successfully. Dec 13 02:42:32.206805 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 02:42:32.208356 systemd-logind[1432]: Session 28 logged out. Waiting for processes to exit. Dec 13 02:42:32.209981 systemd-logind[1432]: Removed session 28.