Oct 8 20:04:43.874140 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 20:04:43.874164 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:04:43.874172 kernel: BIOS-provided physical RAM map:
Oct 8 20:04:43.874177 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 8 20:04:43.874182 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 8 20:04:43.874187 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 8 20:04:43.874193 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Oct 8 20:04:43.874198 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Oct 8 20:04:43.874206 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 8 20:04:43.874211 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 8 20:04:43.874438 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 8 20:04:43.874449 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 8 20:04:43.874455 kernel: NX (Execute Disable) protection: active
Oct 8 20:04:43.874460 kernel: APIC: Static calls initialized
Oct 8 20:04:43.874470 kernel: SMBIOS 2.8 present.
Oct 8 20:04:43.874477 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Oct 8 20:04:43.874482 kernel: Hypervisor detected: KVM
Oct 8 20:04:43.874488 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 20:04:43.874493 kernel: kvm-clock: using sched offset of 2888795267 cycles
Oct 8 20:04:43.874499 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 20:04:43.874505 kernel: tsc: Detected 2445.406 MHz processor
Oct 8 20:04:43.874511 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 20:04:43.874517 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 20:04:43.874524 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Oct 8 20:04:43.874530 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 8 20:04:43.874536 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 20:04:43.874541 kernel: Using GB pages for direct mapping
Oct 8 20:04:43.874547 kernel: ACPI: Early table checksum verification disabled
Oct 8 20:04:43.874552 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Oct 8 20:04:43.874558 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:04:43.874564 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:04:43.874569 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:04:43.874577 kernel: ACPI: FACS 0x000000007CFE0000 000040
Oct 8 20:04:43.874791 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:04:43.874799 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:04:43.874805 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:04:43.874810 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:04:43.874816 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Oct 8 20:04:43.874822 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Oct 8 20:04:43.874827 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Oct 8 20:04:43.874839 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Oct 8 20:04:43.874845 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Oct 8 20:04:43.874851 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Oct 8 20:04:43.874857 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Oct 8 20:04:43.874862 kernel: No NUMA configuration found
Oct 8 20:04:43.874868 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Oct 8 20:04:43.874877 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Oct 8 20:04:43.874883 kernel: Zone ranges:
Oct 8 20:04:43.874888 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 20:04:43.874894 kernel:   DMA32    [mem 0x0000000001000000-0x000000007cfdbfff]
Oct 8 20:04:43.874900 kernel:   Normal   empty
Oct 8 20:04:43.874906 kernel: Movable zone start for each node
Oct 8 20:04:43.874911 kernel: Early memory node ranges
Oct 8 20:04:43.874917 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 8 20:04:43.874923 kernel:   node   0: [mem 0x0000000000100000-0x000000007cfdbfff]
Oct 8 20:04:43.874929 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Oct 8 20:04:43.874937 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 20:04:43.874943 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 8 20:04:43.874948 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 8 20:04:43.874954 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 8 20:04:43.874960 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 20:04:43.874966 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 8 20:04:43.874971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 8 20:04:43.874977 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 20:04:43.874983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 20:04:43.874991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 20:04:43.874997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 20:04:43.875002 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 20:04:43.875008 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 8 20:04:43.875014 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 8 20:04:43.875020 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 20:04:43.875026 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 8 20:04:43.875031 kernel: Booting paravirtualized kernel on KVM
Oct 8 20:04:43.875037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 20:04:43.875045 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 8 20:04:43.875051 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 8 20:04:43.875057 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 8 20:04:43.875063 kernel: pcpu-alloc: [0] 0 1
Oct 8 20:04:43.875069 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 8 20:04:43.875075 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:04:43.875082 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 20:04:43.875087 kernel: random: crng init done
Oct 8 20:04:43.875095 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 20:04:43.875101 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 8 20:04:43.875107 kernel: Fallback order for Node 0: 0
Oct 8 20:04:43.875113 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Oct 8 20:04:43.875119 kernel: Policy zone: DMA32
Oct 8 20:04:43.875124 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 20:04:43.875130 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Oct 8 20:04:43.875136 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 20:04:43.875142 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 20:04:43.875150 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 20:04:43.875156 kernel: Dynamic Preempt: voluntary
Oct 8 20:04:43.875162 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 20:04:43.875168 kernel: rcu: RCU event tracing is enabled.
Oct 8 20:04:43.875175 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 20:04:43.875181 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 20:04:43.875187 kernel: Rude variant of Tasks RCU enabled.
Oct 8 20:04:43.875193 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 20:04:43.875198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 20:04:43.875204 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 20:04:43.875212 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 8 20:04:43.875247 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 20:04:43.875258 kernel: Console: colour VGA+ 80x25
Oct 8 20:04:43.875269 kernel: printk: console [tty0] enabled
Oct 8 20:04:43.875280 kernel: printk: console [ttyS0] enabled
Oct 8 20:04:43.875290 kernel: ACPI: Core revision 20230628
Oct 8 20:04:43.875296 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 8 20:04:43.875302 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 20:04:43.875308 kernel: x2apic enabled
Oct 8 20:04:43.875317 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 20:04:43.875323 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 8 20:04:43.875329 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 8 20:04:43.875335 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct 8 20:04:43.875341 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 8 20:04:43.875346 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 8 20:04:43.875352 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 8 20:04:43.875358 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 20:04:43.875373 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 20:04:43.875393 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 20:04:43.875403 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 20:04:43.875412 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 8 20:04:43.875418 kernel: RETBleed: Mitigation: untrained return thunk
Oct 8 20:04:43.875424 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 8 20:04:43.875431 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 8 20:04:43.875437 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 8 20:04:43.875444 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 8 20:04:43.875450 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 8 20:04:43.875456 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 20:04:43.875464 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 20:04:43.875471 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 20:04:43.875477 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 20:04:43.875483 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 8 20:04:43.875489 kernel: Freeing SMP alternatives memory: 32K
Oct 8 20:04:43.875497 kernel: pid_max: default: 32768 minimum: 301
Oct 8 20:04:43.875503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 20:04:43.875510 kernel: landlock: Up and running.
Oct 8 20:04:43.875516 kernel: SELinux: Initializing.
Oct 8 20:04:43.875522 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 20:04:43.875528 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 20:04:43.875534 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 8 20:04:43.875541 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:04:43.875547 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:04:43.875555 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:04:43.875561 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 8 20:04:43.875567 kernel: ... version:                0
Oct 8 20:04:43.875573 kernel: ... bit width:              48
Oct 8 20:04:43.875579 kernel: ... generic registers:      6
Oct 8 20:04:43.875585 kernel: ... value mask:             0000ffffffffffff
Oct 8 20:04:43.875591 kernel: ... max period:             00007fffffffffff
Oct 8 20:04:43.875597 kernel: ... fixed-purpose events:   0
Oct 8 20:04:43.875604 kernel: ... event mask:             000000000000003f
Oct 8 20:04:43.875610 kernel: signal: max sigframe size: 1776
Oct 8 20:04:43.875618 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 20:04:43.875624 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 20:04:43.875630 kernel: smp: Bringing up secondary CPUs ...
Oct 8 20:04:43.875636 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 20:04:43.875642 kernel: .... node #0, CPUs: #1
Oct 8 20:04:43.875648 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 20:04:43.875655 kernel: smpboot: Max logical packages: 1
Oct 8 20:04:43.875661 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Oct 8 20:04:43.875667 kernel: devtmpfs: initialized
Oct 8 20:04:43.875675 kernel: x86/mm: Memory block size: 128MB
Oct 8 20:04:43.875681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 20:04:43.875687 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 20:04:43.875693 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 20:04:43.875700 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 20:04:43.875706 kernel: audit: initializing netlink subsys (disabled)
Oct 8 20:04:43.875712 kernel: audit: type=2000 audit(1728417882.189:1): state=initialized audit_enabled=0 res=1
Oct 8 20:04:43.875718 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 20:04:43.875724 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 20:04:43.875732 kernel: cpuidle: using governor menu
Oct 8 20:04:43.875738 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 20:04:43.875744 kernel: dca service started, version 1.12.1
Oct 8 20:04:43.875750 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 8 20:04:43.875757 kernel: PCI: Using configuration type 1 for base access
Oct 8 20:04:43.875763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 20:04:43.875769 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 20:04:43.875775 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 20:04:43.875781 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:04:43.875789 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:04:43.875795 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:04:43.875801 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:04:43.875807 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:04:43.875814 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:04:43.875820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:04:43.875826 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 20:04:43.875832 kernel: ACPI: Interpreter enabled
Oct 8 20:04:43.875838 kernel: ACPI: PM: (supports S0 S5)
Oct 8 20:04:43.875846 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 20:04:43.875853 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 20:04:43.875859 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 20:04:43.875865 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 8 20:04:43.875871 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 20:04:43.876032 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 20:04:43.876148 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 8 20:04:43.877949 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 8 20:04:43.877964 kernel: PCI host bridge to bus 0000:00
Oct 8 20:04:43.881377 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 20:04:43.881489 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 20:04:43.881587 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 20:04:43.881683 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Oct 8 20:04:43.881776 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 8 20:04:43.881875 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 8 20:04:43.881968 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 20:04:43.882089 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 8 20:04:43.882204 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Oct 8 20:04:43.882358 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Oct 8 20:04:43.882471 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Oct 8 20:04:43.882577 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Oct 8 20:04:43.882687 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Oct 8 20:04:43.882791 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 20:04:43.882904 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.883009 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Oct 8 20:04:43.883120 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.886261 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Oct 8 20:04:43.886416 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.886527 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Oct 8 20:04:43.886644 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.886750 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Oct 8 20:04:43.886861 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.886965 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Oct 8 20:04:43.887080 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.887186 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Oct 8 20:04:43.888368 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.888485 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Oct 8 20:04:43.888599 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.888705 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Oct 8 20:04:43.888822 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 8 20:04:43.888928 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Oct 8 20:04:43.889038 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 8 20:04:43.889142 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 8 20:04:43.893312 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 8 20:04:43.893435 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Oct 8 20:04:43.893542 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Oct 8 20:04:43.893665 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 8 20:04:43.893770 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 8 20:04:43.893888 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 20:04:43.893999 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Oct 8 20:04:43.894107 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 8 20:04:43.894239 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Oct 8 20:04:43.894388 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 20:04:43.894497 kernel: pci 0000:00:02.0:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct 8 20:04:43.894602 kernel: pci 0000:00:02.0:   bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 8 20:04:43.894719 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 8 20:04:43.894830 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Oct 8 20:04:43.894936 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 20:04:43.895040 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct 8 20:04:43.895150 kernel: pci 0000:00:02.1:   bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 8 20:04:43.898100 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 8 20:04:43.898240 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Oct 8 20:04:43.898384 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Oct 8 20:04:43.898494 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 20:04:43.898598 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct 8 20:04:43.898701 kernel: pci 0000:00:02.2:   bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 8 20:04:43.898823 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 8 20:04:43.898933 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 8 20:04:43.899037 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 20:04:43.899141 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct 8 20:04:43.899283 kernel: pci 0000:00:02.3:   bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 8 20:04:43.899438 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 8 20:04:43.899550 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Oct 8 20:04:43.899660 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 20:04:43.899763 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct 8 20:04:43.899866 kernel: pci 0000:00:02.4:   bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 8 20:04:43.900017 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 8 20:04:43.900136 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Oct 8 20:04:43.901735 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Oct 8 20:04:43.901868 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 20:04:43.901985 kernel: pci 0000:00:02.5:   bridge window [mem 0xfde00000-0xfdffffff]
Oct 8 20:04:43.902091 kernel: pci 0000:00:02.5:   bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 8 20:04:43.902100 kernel: acpiphp: Slot [0] registered
Oct 8 20:04:43.902237 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 20:04:43.902391 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Oct 8 20:04:43.902504 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Oct 8 20:04:43.902615 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Oct 8 20:04:43.902720 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 20:04:43.902831 kernel: pci 0000:00:02.6:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct 8 20:04:43.902934 kernel: pci 0000:00:02.6:   bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 8 20:04:43.902943 kernel: acpiphp: Slot [0-2] registered
Oct 8 20:04:43.903046 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 20:04:43.903149 kernel: pci 0000:00:02.7:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct 8 20:04:43.905229 kernel: pci 0000:00:02.7:   bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 8 20:04:43.905244 kernel: acpiphp: Slot [0-3] registered
Oct 8 20:04:43.905386 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 20:04:43.905502 kernel: pci 0000:00:03.0:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct 8 20:04:43.905607 kernel: pci 0000:00:03.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 8 20:04:43.905616 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 20:04:43.905623 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 20:04:43.905629 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 20:04:43.905636 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 20:04:43.905642 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 8 20:04:43.905648 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 8 20:04:43.905654 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 8 20:04:43.905663 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 8 20:04:43.905670 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 8 20:04:43.905676 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 8 20:04:43.905682 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 8 20:04:43.905688 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 8 20:04:43.905695 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 8 20:04:43.905701 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 8 20:04:43.905707 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 8 20:04:43.905713 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 8 20:04:43.905722 kernel: iommu: Default domain type: Translated
Oct 8 20:04:43.905728 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 20:04:43.905734 kernel: PCI: Using ACPI for IRQ routing
Oct 8 20:04:43.905741 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 20:04:43.905747 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 8 20:04:43.905754 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Oct 8 20:04:43.905857 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 8 20:04:43.905961 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 8 20:04:43.906064 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 20:04:43.906075 kernel: vgaarb: loaded
Oct 8 20:04:43.906082 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 8 20:04:43.906088 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 8 20:04:43.906094 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 20:04:43.906101 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:04:43.906107 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:04:43.906113 kernel: pnp: PnP ACPI init
Oct 8 20:04:43.906247 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 8 20:04:43.906267 kernel: pnp: PnP ACPI: found 5 devices
Oct 8 20:04:43.906280 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 20:04:43.906292 kernel: NET: Registered PF_INET protocol family
Oct 8 20:04:43.906304 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 20:04:43.906316 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 8 20:04:43.906328 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:04:43.906335 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 8 20:04:43.906341 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 8 20:04:43.906347 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 8 20:04:43.906357 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 20:04:43.906363 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 20:04:43.906369 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 20:04:43.906375 kernel: NET: Registered PF_XDP protocol family
Oct 8 20:04:43.906490 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 8 20:04:43.906596 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 8 20:04:43.906701 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 8 20:04:43.906811 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Oct 8 20:04:43.906915 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Oct 8 20:04:43.907018 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Oct 8 20:04:43.907120 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 20:04:43.910263 kernel: pci 0000:00:02.0:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct 8 20:04:43.910408 kernel: pci 0000:00:02.0:   bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 8 20:04:43.910520 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 20:04:43.910626 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct 8 20:04:43.910736 kernel: pci 0000:00:02.1:   bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 8 20:04:43.910838 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 20:04:43.910941 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct 8 20:04:43.911043 kernel: pci 0000:00:02.2:   bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 8 20:04:43.911144 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 20:04:43.911276 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct 8 20:04:43.911424 kernel: pci 0000:00:02.3:   bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 8 20:04:43.911538 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 20:04:43.911661 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct 8 20:04:43.911765 kernel: pci 0000:00:02.4:   bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 8 20:04:43.911867 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 20:04:43.911970 kernel: pci 0000:00:02.5:   bridge window [mem 0xfde00000-0xfdffffff]
Oct 8 20:04:43.912072 kernel: pci 0000:00:02.5:   bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 8 20:04:43.912174 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 20:04:43.913350 kernel: pci 0000:00:02.6:   bridge window [io 0x1000-0x1fff]
Oct 8 20:04:43.913488 kernel: pci 0000:00:02.6:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct 8 20:04:43.913596 kernel: pci 0000:00:02.6:   bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 8 20:04:43.913706 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 20:04:43.913808 kernel: pci 0000:00:02.7:   bridge window [io 0x2000-0x2fff]
Oct 8 20:04:43.913912 kernel: pci 0000:00:02.7:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct 8 20:04:43.914015 kernel: pci 0000:00:02.7:   bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 8 20:04:43.914151 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 20:04:43.914351 kernel: pci 0000:00:03.0:   bridge window [io 0x3000-0x3fff]
Oct 8 20:04:43.914464 kernel: pci 0000:00:03.0:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct 8 20:04:43.914574 kernel: pci 0000:00:03.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 8 20:04:43.914674 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 20:04:43.914773 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 20:04:43.914930 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 20:04:43.915037 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Oct 8 20:04:43.915132 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 8 20:04:43.918289 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 8 20:04:43.918422 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 8 20:04:43.918526 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 8 20:04:43.918634 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 8 20:04:43.918740 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 8 20:04:43.918846 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 8 20:04:43.918945 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 8 20:04:43.919052 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 8 20:04:43.919151 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 8 20:04:43.919304 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 8 20:04:43.919436 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 8 20:04:43.919550 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 8 20:04:43.919650 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 8 20:04:43.919755 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Oct 8 20:04:43.919855 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 8 20:04:43.919953 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 8 20:04:43.920062 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Oct 8 20:04:43.920167 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct 8 20:04:43.923533 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 8 20:04:43.923651 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Oct 8 20:04:43.923753 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 8 20:04:43.923852 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 8 20:04:43.923862 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 8 20:04:43.923869 kernel: PCI: CLS 0 bytes, default 64
Oct 8 20:04:43.923880 kernel: Initialise system trusted keyrings
Oct 8 20:04:43.923887 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 8 20:04:43.923893 kernel: Key type asymmetric registered
Oct 8 20:04:43.923900 kernel: Asymmetric key parser 'x509' registered
Oct 8 20:04:43.923907 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 20:04:43.923913 kernel: io scheduler mq-deadline registered
Oct 8 20:04:43.923920 kernel: io scheduler kyber registered
Oct 8 20:04:43.923926 kernel: io scheduler bfq registered
Oct 8 20:04:43.924032 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct 8 20:04:43.924143 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct 8 20:04:43.924286 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct 8 20:04:43.924409 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct 8 20:04:43.924514 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct 8 20:04:43.924618 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct 8 20:04:43.924721 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct 8 20:04:43.924825 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct 8 20:04:43.924929 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct 8 20:04:43.925033 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct 8 20:04:43.925141 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct 8 20:04:43.925272 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct 8 20:04:43.925408 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct 8 20:04:43.925515 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct 8 20:04:43.925621 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct 8 20:04:43.925727 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct 8 20:04:43.925736 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 8 20:04:43.925839 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct 8 20:04:43.925949 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct 8 20:04:43.925958 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 20:04:43.925965 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct 8 20:04:43.925972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 20:04:43.925978 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 20:04:43.925985 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 20:04:43.925992 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 20:04:43.925998 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 20:04:43.926005 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 8 20:04:43.926116 kernel:
rtc_cmos 00:03: RTC can wake from S4 Oct 8 20:04:43.926231 kernel: rtc_cmos 00:03: registered as rtc0 Oct 8 20:04:43.926375 kernel: rtc_cmos 00:03: setting system clock to 2024-10-08T20:04:43 UTC (1728417883) Oct 8 20:04:43.926478 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 20:04:43.926487 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 20:04:43.926494 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:04:43.926501 kernel: Segment Routing with IPv6 Oct 8 20:04:43.926507 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:04:43.926519 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:04:43.926526 kernel: Key type dns_resolver registered Oct 8 20:04:43.926533 kernel: IPI shorthand broadcast: enabled Oct 8 20:04:43.926540 kernel: sched_clock: Marking stable (1043011418, 130166466)->(1180948519, -7770635) Oct 8 20:04:43.926546 kernel: registered taskstats version 1 Oct 8 20:04:43.926553 kernel: Loading compiled-in X.509 certificates Oct 8 20:04:43.926560 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:04:43.926566 kernel: Key type .fscrypt registered Oct 8 20:04:43.926573 kernel: Key type fscrypt-provisioning registered Oct 8 20:04:43.926582 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 8 20:04:43.926588 kernel: ima: Allocated hash algorithm: sha1
Oct 8 20:04:43.926595 kernel: ima: No architecture policies found
Oct 8 20:04:43.926601 kernel: clk: Disabling unused clocks
Oct 8 20:04:43.926608 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 20:04:43.926615 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 20:04:43.926621 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 20:04:43.926628 kernel: Run /init as init process
Oct 8 20:04:43.926636 kernel: with arguments:
Oct 8 20:04:43.926643 kernel: /init
Oct 8 20:04:43.926649 kernel: with environment:
Oct 8 20:04:43.926656 kernel: HOME=/
Oct 8 20:04:43.926662 kernel: TERM=linux
Oct 8 20:04:43.926668 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 20:04:43.926677 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:04:43.926686 systemd[1]: Detected virtualization kvm.
Oct 8 20:04:43.926695 systemd[1]: Detected architecture x86-64.
Oct 8 20:04:43.926702 systemd[1]: Running in initrd.
Oct 8 20:04:43.926708 systemd[1]: No hostname configured, using default hostname.
Oct 8 20:04:43.926715 systemd[1]: Hostname set to .
Oct 8 20:04:43.926722 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:04:43.926729 systemd[1]: Queued start job for default target initrd.target.
Oct 8 20:04:43.926737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:04:43.926744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:04:43.926753 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 20:04:43.926760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:04:43.926767 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 20:04:43.926774 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 20:04:43.926782 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 20:04:43.926790 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 20:04:43.926797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:04:43.926806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:04:43.926813 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:04:43.926820 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:04:43.926827 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:04:43.926833 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:04:43.926840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:04:43.926847 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:04:43.926854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:04:43.926861 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:04:43.926870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:04:43.926877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:04:43.926884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:04:43.926891 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:04:43.926898 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 20:04:43.926905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:04:43.926912 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 20:04:43.926919 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 20:04:43.926928 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:04:43.926934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:04:43.926959 systemd-journald[187]: Collecting audit messages is disabled.
Oct 8 20:04:43.926976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:04:43.926986 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 20:04:43.926993 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:04:43.927000 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 20:04:43.927008 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:04:43.927015 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:04:43.927031 systemd-journald[187]: Journal started
Oct 8 20:04:43.927047 systemd-journald[187]: Runtime Journal (/run/log/journal/a70f95f746a74bb3b4a1bba5bd009925) is 4.8M, max 38.4M, 33.6M free.
Oct 8 20:04:43.903248 systemd-modules-load[188]: Inserted module 'overlay'
Oct 8 20:04:43.934373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:04:43.937238 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:04:43.937646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:04:43.940456 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 20:04:43.943046 systemd-modules-load[188]: Inserted module 'br_netfilter'
Oct 8 20:04:43.943549 kernel: Bridge firewalling registered
Oct 8 20:04:43.944508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:04:43.945723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:04:43.950367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:04:43.952178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:04:43.954345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:04:43.964727 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:04:43.968715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:04:43.970015 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:04:43.993351 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 20:04:43.997346 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:04:44.001992 dracut-cmdline[222]: dracut-dracut-053
Oct 8 20:04:44.004370 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:04:44.028531 systemd-resolved[223]: Positive Trust Anchors:
Oct 8 20:04:44.029436 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:04:44.029486 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:04:44.033164 systemd-resolved[223]: Defaulting to hostname 'linux'.
Oct 8 20:04:44.034443 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:04:44.036349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:04:44.074258 kernel: SCSI subsystem initialized
Oct 8 20:04:44.083243 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 20:04:44.093250 kernel: iscsi: registered transport (tcp)
Oct 8 20:04:44.111443 kernel: iscsi: registered transport (qla4xxx)
Oct 8 20:04:44.111496 kernel: QLogic iSCSI HBA Driver
Oct 8 20:04:44.147101 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:04:44.151352 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 20:04:44.174260 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 20:04:44.174310 kernel: device-mapper: uevent: version 1.0.3
Oct 8 20:04:44.175241 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 20:04:44.215243 kernel: raid6: avx2x4 gen() 31444 MB/s
Oct 8 20:04:44.232242 kernel: raid6: avx2x2 gen() 28794 MB/s
Oct 8 20:04:44.249333 kernel: raid6: avx2x1 gen() 24147 MB/s
Oct 8 20:04:44.249359 kernel: raid6: using algorithm avx2x4 gen() 31444 MB/s
Oct 8 20:04:44.267450 kernel: raid6: .... xor() 4493 MB/s, rmw enabled
Oct 8 20:04:44.267491 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 20:04:44.286250 kernel: xor: automatically using best checksumming function avx
Oct 8 20:04:44.404261 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 20:04:44.415102 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:04:44.423424 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:04:44.434333 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Oct 8 20:04:44.438010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:04:44.444449 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 20:04:44.461631 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Oct 8 20:04:44.489647 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:04:44.494396 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:04:44.562325 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:04:44.569379 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 20:04:44.583117 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:04:44.586619 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:04:44.587794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:04:44.588854 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:04:44.594369 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 20:04:44.609457 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:04:44.648273 kernel: ACPI: bus type USB registered
Oct 8 20:04:44.712463 kernel: usbcore: registered new interface driver usbfs
Oct 8 20:04:44.712521 kernel: usbcore: registered new interface driver hub
Oct 8 20:04:44.712532 kernel: usbcore: registered new device driver usb
Oct 8 20:04:44.723244 kernel: libata version 3.00 loaded.
Oct 8 20:04:44.732241 kernel: scsi host0: Virtio SCSI HBA
Oct 8 20:04:44.735967 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 20:04:44.742321 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:04:44.744260 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Oct 8 20:04:44.742474 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:04:44.745656 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:04:44.746816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:04:44.746925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:04:44.747490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:04:44.756453 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:04:44.787252 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 20:04:44.787494 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 20:04:44.788238 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 20:04:44.788417 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 20:04:44.791235 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 20:04:44.791254 kernel: AES CTR mode by8 optimization enabled
Oct 8 20:04:44.794242 kernel: scsi host1: ahci
Oct 8 20:04:44.797236 kernel: scsi host2: ahci
Oct 8 20:04:44.797415 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 20:04:44.797554 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Oct 8 20:04:44.797682 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Oct 8 20:04:44.798236 kernel: scsi host3: ahci
Oct 8 20:04:44.800139 kernel: scsi host4: ahci
Oct 8 20:04:44.800319 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 20:04:44.800453 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Oct 8 20:04:44.800579 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Oct 8 20:04:44.803306 kernel: hub 1-0:1.0: USB hub found
Oct 8 20:04:44.803492 kernel: hub 1-0:1.0: 4 ports detected
Oct 8 20:04:44.803628 kernel: scsi host5: ahci
Oct 8 20:04:44.803754 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Oct 8 20:04:44.803889 kernel: hub 2-0:1.0: USB hub found
Oct 8 20:04:44.804022 kernel: hub 2-0:1.0: 4 ports detected
Oct 8 20:04:44.806228 kernel: scsi host6: ahci
Oct 8 20:04:44.806382 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Oct 8 20:04:44.806393 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Oct 8 20:04:44.806401 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Oct 8 20:04:44.806409 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Oct 8 20:04:44.806417 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Oct 8 20:04:44.806425 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Oct 8 20:04:44.842301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:04:44.851347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:04:44.861169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:04:45.046272 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Oct 8 20:04:45.118246 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 20:04:45.118335 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 20:04:45.118354 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 20:04:45.118368 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct 8 20:04:45.119885 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 20:04:45.120431 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 20:04:45.122257 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 20:04:45.123739 kernel: ata1.00: applying bridge limits
Oct 8 20:04:45.124863 kernel: ata1.00: configured for UDMA/100
Oct 8 20:04:45.125646 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 20:04:45.166360 kernel: sd 0:0:0:0: Power-on or device reset occurred
Oct 8 20:04:45.169816 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Oct 8 20:04:45.170021 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 8 20:04:45.170255 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Oct 8 20:04:45.170430 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 8 20:04:45.177243 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 20:04:45.177267 kernel: GPT:17805311 != 80003071
Oct 8 20:04:45.177277 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 20:04:45.179004 kernel: GPT:17805311 != 80003071
Oct 8 20:04:45.186020 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 20:04:45.186044 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:04:45.189824 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 8 20:04:45.189999 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 20:04:45.196246 kernel: usbcore: registered new interface driver usbhid
Oct 8 20:04:45.196277 kernel: usbhid: USB HID core driver
Oct 8 20:04:45.200540 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 20:04:45.200780 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 20:04:45.200798 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Oct 8 20:04:45.204443 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Oct 8 20:04:45.218303 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Oct 8 20:04:45.230910 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (460)
Oct 8 20:04:45.234264 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (452)
Oct 8 20:04:45.236096 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Oct 8 20:04:45.244171 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Oct 8 20:04:45.252333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 20:04:45.257827 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Oct 8 20:04:45.259333 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Oct 8 20:04:45.265399 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 20:04:45.272877 disk-uuid[577]: Primary Header is updated.
Oct 8 20:04:45.272877 disk-uuid[577]: Secondary Entries is updated.
Oct 8 20:04:45.272877 disk-uuid[577]: Secondary Header is updated.
Oct 8 20:04:45.279686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:04:45.288253 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:04:45.296253 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:04:46.296287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:04:46.297758 disk-uuid[579]: The operation has completed successfully.
Oct 8 20:04:46.356045 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 20:04:46.356157 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 20:04:46.365437 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 20:04:46.370380 sh[598]: Success
Oct 8 20:04:46.383240 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 20:04:46.426641 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 20:04:46.440331 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 20:04:46.441001 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 20:04:46.456482 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 20:04:46.456526 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:04:46.459104 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 20:04:46.459131 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 20:04:46.461291 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 20:04:46.468243 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 8 20:04:46.470519 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 20:04:46.471822 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 20:04:46.477361 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 20:04:46.479810 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 20:04:46.493714 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:04:46.493744 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:04:46.493754 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:04:46.498408 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 20:04:46.498431 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:04:46.508822 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 20:04:46.509806 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:04:46.514049 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 20:04:46.518416 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 20:04:46.589902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:04:46.593884 ignition[702]: Ignition 2.19.0
Oct 8 20:04:46.593893 ignition[702]: Stage: fetch-offline
Oct 8 20:04:46.598420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:04:46.593929 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:04:46.600728 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:04:46.593938 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:04:46.594005 ignition[702]: parsed url from cmdline: ""
Oct 8 20:04:46.594008 ignition[702]: no config URL provided
Oct 8 20:04:46.594013 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:04:46.594020 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:04:46.594025 ignition[702]: failed to fetch config: resource requires networking
Oct 8 20:04:46.594401 ignition[702]: Ignition finished successfully
Oct 8 20:04:46.619122 systemd-networkd[784]: lo: Link UP
Oct 8 20:04:46.619132 systemd-networkd[784]: lo: Gained carrier
Oct 8 20:04:46.621772 systemd-networkd[784]: Enumeration completed
Oct 8 20:04:46.621843 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:04:46.622430 systemd[1]: Reached target network.target - Network.
Oct 8 20:04:46.623040 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:46.623044 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:04:46.624700 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:46.624703 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:04:46.626333 systemd-networkd[784]: eth0: Link UP
Oct 8 20:04:46.626337 systemd-networkd[784]: eth0: Gained carrier
Oct 8 20:04:46.626344 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:46.629572 systemd-networkd[784]: eth1: Link UP
Oct 8 20:04:46.629577 systemd-networkd[784]: eth1: Gained carrier
Oct 8 20:04:46.629585 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:46.629817 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 20:04:46.642780 ignition[787]: Ignition 2.19.0
Oct 8 20:04:46.642790 ignition[787]: Stage: fetch
Oct 8 20:04:46.642926 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:04:46.642936 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:04:46.643010 ignition[787]: parsed url from cmdline: ""
Oct 8 20:04:46.643014 ignition[787]: no config URL provided
Oct 8 20:04:46.643018 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:04:46.643026 ignition[787]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:04:46.643047 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Oct 8 20:04:46.643167 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Oct 8 20:04:46.665265 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:04:46.761329 systemd-networkd[784]: eth0: DHCPv4 address 49.13.138.82/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 20:04:46.843264 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Oct 8 20:04:46.846958 ignition[787]: GET result: OK
Oct 8 20:04:46.847013 ignition[787]: parsing config with SHA512: 5bf8230c6dfc1f697aca13bc1dfbfb6ab19faa77dafccb39ac82b667b2606fd5fbc120a7083eec8a49c5910354363833e32f75127482f052ca3af97426c46495
Oct 8 20:04:46.851380 unknown[787]: fetched base config from "system"
Oct 8 20:04:46.851393 unknown[787]: fetched base config from "system"
Oct 8 20:04:46.851653 ignition[787]: fetch: fetch complete
Oct 8 20:04:46.851398 unknown[787]: fetched user config from "hetzner"
Oct 8 20:04:46.851657 ignition[787]: fetch: fetch passed
Oct 8 20:04:46.853844 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 20:04:46.851697 ignition[787]: Ignition finished successfully
Oct 8 20:04:46.860385 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 20:04:46.872091 ignition[794]: Ignition 2.19.0
Oct 8 20:04:46.872100 ignition[794]: Stage: kargs
Oct 8 20:04:46.872269 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:04:46.872280 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:04:46.872989 ignition[794]: kargs: kargs passed
Oct 8 20:04:46.874383 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 20:04:46.873025 ignition[794]: Ignition finished successfully
Oct 8 20:04:46.890384 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 20:04:46.900044 ignition[801]: Ignition 2.19.0
Oct 8 20:04:46.900060 ignition[801]: Stage: disks
Oct 8 20:04:46.900261 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:04:46.900273 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:04:46.902233 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 20:04:46.901133 ignition[801]: disks: disks passed
Oct 8 20:04:46.903637 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 20:04:46.901178 ignition[801]: Ignition finished successfully
Oct 8 20:04:46.904182 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:04:46.904976 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:04:46.905945 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:04:46.906772 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:04:46.913334 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:04:46.926135 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 8 20:04:46.929876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:04:46.936284 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:04:47.013681 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 8 20:04:47.014113 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:04:47.015037 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:04:47.020301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:04:47.022532 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:04:47.025361 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 8 20:04:47.026922 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:04:47.026946 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:04:47.033570 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Oct 8 20:04:47.035871 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (817) Oct 8 20:04:47.035896 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:04:47.038250 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:04:47.038280 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:04:47.042236 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:04:47.042258 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:04:47.051156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:04:47.053389 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 20:04:47.099611 coreos-metadata[819]: Oct 08 20:04:47.099 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 8 20:04:47.100636 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:04:47.102638 coreos-metadata[819]: Oct 08 20:04:47.101 INFO Fetch successful Oct 8 20:04:47.102638 coreos-metadata[819]: Oct 08 20:04:47.101 INFO wrote hostname ci-4081-1-0-7-3c1e2fa9c6 to /sysroot/etc/hostname Oct 8 20:04:47.103810 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:04:47.106908 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:04:47.110538 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:04:47.114622 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:04:47.194066 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:04:47.200309 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:04:47.203391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Oct 8 20:04:47.211290 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:04:47.237628 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:04:47.240984 ignition[934]: INFO : Ignition 2.19.0 Oct 8 20:04:47.240984 ignition[934]: INFO : Stage: mount Oct 8 20:04:47.242478 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:04:47.242478 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:04:47.244048 ignition[934]: INFO : mount: mount passed Oct 8 20:04:47.244048 ignition[934]: INFO : Ignition finished successfully Oct 8 20:04:47.244905 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:04:47.254306 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:04:47.455899 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:04:47.461431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:04:47.472361 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (945) Oct 8 20:04:47.476468 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:04:47.476498 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:04:47.476509 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:04:47.483255 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:04:47.483288 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:04:47.486321 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:04:47.506871 ignition[962]: INFO : Ignition 2.19.0 Oct 8 20:04:47.506871 ignition[962]: INFO : Stage: files Oct 8 20:04:47.508515 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:04:47.508515 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:04:47.508515 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:04:47.511059 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:04:47.511059 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:04:47.513095 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:04:47.513095 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:04:47.515549 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:04:47.514495 unknown[962]: wrote ssh authorized keys file for user: core Oct 8 20:04:47.517273 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:04:47.517273 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:04:47.585620 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:04:47.850409 systemd-networkd[784]: eth0: Gained IPv6LL Oct 8 20:04:47.871381 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:04:47.871381 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:04:47.875209 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 8 20:04:48.170430 systemd-networkd[784]: eth1: Gained IPv6LL Oct 8 20:04:48.416707 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 20:04:48.697993 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:04:48.697993 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 20:04:48.701051 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:04:48.703384 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:04:48.703384 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:04:48.703384 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:04:48.703384 ignition[962]: INFO : files: files passed Oct 8 20:04:48.703384 ignition[962]: INFO : Ignition finished successfully Oct 8 20:04:48.705238 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:04:48.711411 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:04:48.715357 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:04:48.716392 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:04:48.717120 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:04:48.728170 initrd-setup-root-after-ignition[990]: grep: Oct 8 20:04:48.729330 initrd-setup-root-after-ignition[994]: grep: Oct 8 20:04:48.729330 initrd-setup-root-after-ignition[990]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:04:48.729330 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:04:48.733288 initrd-setup-root-after-ignition[994]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:04:48.731846 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:04:48.732717 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:04:48.738489 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:04:48.763166 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:04:48.763370 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:04:48.764985 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 20:04:48.766186 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:04:48.766767 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:04:48.773432 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:04:48.785793 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:04:48.791397 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:04:48.803721 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:04:48.804614 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:04:48.805832 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:04:48.807054 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:04:48.807281 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:04:48.808725 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:04:48.810054 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:04:48.811124 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:04:48.812131 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:04:48.813316 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:04:48.814535 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:04:48.815682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:04:48.816862 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:04:48.817962 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:04:48.819131 systemd[1]: Stopped target swap.target - Swaps. 
Oct 8 20:04:48.820108 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:04:48.820273 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:04:48.821663 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:04:48.823027 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:04:48.824248 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:04:48.824405 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:04:48.825398 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:04:48.825543 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:04:48.826997 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:04:48.827245 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:04:48.828463 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:04:48.828669 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:04:48.829542 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 8 20:04:48.829693 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:04:48.836458 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:04:48.837059 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:04:48.837269 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:04:48.842361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:04:48.842943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:04:48.843081 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:04:48.846194 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 8 20:04:48.846321 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:04:48.851387 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:04:48.851983 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:04:48.858228 ignition[1014]: INFO : Ignition 2.19.0 Oct 8 20:04:48.858228 ignition[1014]: INFO : Stage: umount Oct 8 20:04:48.858228 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:04:48.858228 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:04:48.862885 ignition[1014]: INFO : umount: umount passed Oct 8 20:04:48.862885 ignition[1014]: INFO : Ignition finished successfully Oct 8 20:04:48.861676 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:04:48.861811 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:04:48.862880 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:04:48.862961 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:04:48.864807 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:04:48.864873 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:04:48.866600 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 20:04:48.866646 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 20:04:48.867508 systemd[1]: Stopped target network.target - Network. Oct 8 20:04:48.869861 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:04:48.869913 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:04:48.870870 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:04:48.872518 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:04:48.878271 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 8 20:04:48.879044 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:04:48.879453 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:04:48.879871 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:04:48.879916 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:04:48.881123 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:04:48.881165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:04:48.881609 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:04:48.881656 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:04:48.882079 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:04:48.882121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:04:48.882768 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:04:48.884041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:04:48.886274 systemd-networkd[784]: eth0: DHCPv6 lease lost Oct 8 20:04:48.886438 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:04:48.886941 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:04:48.887048 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:04:48.888630 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:04:48.888708 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:04:48.889281 systemd-networkd[784]: eth1: DHCPv6 lease lost Oct 8 20:04:48.892387 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:04:48.892529 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:04:48.895317 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:04:48.895511 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Oct 8 20:04:48.897849 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:04:48.897908 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:04:48.903390 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:04:48.903870 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:04:48.903924 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:04:48.905062 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:04:48.905119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:04:48.907720 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:04:48.907785 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:04:48.908581 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:04:48.908626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:04:48.909777 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:04:48.919800 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:04:48.919934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:04:48.923946 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:04:48.924125 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:04:48.925258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:04:48.925305 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:04:48.926116 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:04:48.926160 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:04:48.927159 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Oct 8 20:04:48.927212 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:04:48.928856 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:04:48.928906 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:04:48.930095 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:04:48.930142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:04:48.938442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:04:48.938918 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:04:48.938981 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:04:48.939519 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:04:48.939567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:04:48.946317 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:04:48.946433 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:04:48.947666 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:04:48.953362 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:04:48.960105 systemd[1]: Switching root. Oct 8 20:04:49.022728 systemd-journald[187]: Journal stopped Oct 8 20:04:49.933468 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
Oct 8 20:04:49.933539 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:04:49.933559 kernel: SELinux: policy capability open_perms=1 Oct 8 20:04:49.933577 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:04:49.933587 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:04:49.933597 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:04:49.933606 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:04:49.933616 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:04:49.933627 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:04:49.933637 kernel: audit: type=1403 audit(1728417889.149:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:04:49.933652 systemd[1]: Successfully loaded SELinux policy in 39.877ms. Oct 8 20:04:49.933668 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.014ms. Oct 8 20:04:49.933680 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:04:49.933690 systemd[1]: Detected virtualization kvm. Oct 8 20:04:49.933700 systemd[1]: Detected architecture x86-64. Oct 8 20:04:49.933710 systemd[1]: Detected first boot. Oct 8 20:04:49.933722 systemd[1]: Hostname set to . Oct 8 20:04:49.933732 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:04:49.933742 zram_generator::config[1057]: No configuration found. Oct 8 20:04:49.933753 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:04:49.933763 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:04:49.933774 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Oct 8 20:04:49.933784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:04:49.933794 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:04:49.933804 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 20:04:49.933816 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:04:49.933826 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:04:49.933836 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:04:49.933847 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 20:04:49.933857 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:04:49.933866 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 20:04:49.933880 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:04:49.933895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:04:49.933907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:04:49.933917 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:04:49.933927 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:04:49.933937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:04:49.933948 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 20:04:49.933958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:04:49.933968 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Oct 8 20:04:49.933978 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:04:49.933991 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:04:49.934001 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:04:49.934011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:04:49.934021 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:04:49.934031 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:04:49.934041 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:04:49.934051 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:04:49.934061 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:04:49.934073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:04:49.934083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:04:49.934094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:04:49.934104 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:04:49.934114 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:04:49.934132 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:04:49.934160 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:04:49.934182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:04:49.934330 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:04:49.934345 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:04:49.934355 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 8 20:04:49.934366 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:04:49.934376 systemd[1]: Reached target machines.target - Containers. Oct 8 20:04:49.934386 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 20:04:49.934414 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:04:49.934438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:04:49.934463 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:04:49.934475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:04:49.934485 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:04:49.934495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:04:49.934505 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:04:49.934514 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:04:49.934529 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:04:49.934540 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:04:49.934550 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:04:49.934560 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:04:49.934570 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:04:49.934580 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:04:49.934590 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Oct 8 20:04:49.934600 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 20:04:49.934610 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 20:04:49.934622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:04:49.939614 kernel: fuse: init (API version 7.39) Oct 8 20:04:49.939638 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:04:49.939650 kernel: loop: module loaded Oct 8 20:04:49.939661 systemd[1]: Stopped verity-setup.service. Oct 8 20:04:49.939672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:04:49.939682 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 20:04:49.939692 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:04:49.939707 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:04:49.939717 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:04:49.939727 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:04:49.939737 kernel: ACPI: bus type drm_connector registered Oct 8 20:04:49.939747 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:04:49.939758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:04:49.939770 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:04:49.939781 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:04:49.939791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:04:49.939801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:04:49.939810 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 8 20:04:49.939820 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:04:49.939852 systemd-journald[1130]: Collecting audit messages is disabled.
Oct 8 20:04:49.939877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:04:49.939888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:04:49.939898 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 20:04:49.939908 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 20:04:49.939918 systemd-journald[1130]: Journal started
Oct 8 20:04:49.939939 systemd-journald[1130]: Runtime Journal (/run/log/journal/a70f95f746a74bb3b4a1bba5bd009925) is 4.8M, max 38.4M, 33.6M free.
Oct 8 20:04:49.662156 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 20:04:49.679397 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 8 20:04:49.679820 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 20:04:49.944264 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:04:49.945974 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:04:49.946685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:04:49.948563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:04:49.949367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 20:04:49.950058 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 20:04:49.961057 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 20:04:49.967896 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 20:04:49.974665 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 20:04:49.980089 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 20:04:49.980996 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 20:04:49.981080 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:04:49.982419 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 20:04:49.988364 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 20:04:49.991375 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 20:04:49.992518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:04:50.000491 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 20:04:50.004431 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 20:04:50.005601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:04:50.007404 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 20:04:50.007900 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:04:50.010359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:04:50.012357 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 20:04:50.020405 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 20:04:50.030157 systemd-journald[1130]: Time spent on flushing to /var/log/journal/a70f95f746a74bb3b4a1bba5bd009925 is 19.962ms for 1133 entries.
Oct 8 20:04:50.030157 systemd-journald[1130]: System Journal (/var/log/journal/a70f95f746a74bb3b4a1bba5bd009925) is 8.0M, max 584.8M, 576.8M free.
Oct 8 20:04:50.071635 systemd-journald[1130]: Received client request to flush runtime journal.
Oct 8 20:04:50.023806 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 20:04:50.024394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 20:04:50.025622 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 20:04:50.053031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:04:50.065375 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 20:04:50.073051 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 20:04:50.077235 kernel: loop0: detected capacity change from 0 to 8
Oct 8 20:04:50.084846 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 20:04:50.086250 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 20:04:50.086873 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 20:04:50.095474 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 20:04:50.106773 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 20:04:50.117250 kernel: loop1: detected capacity change from 0 to 142488
Oct 8 20:04:50.121656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:04:50.129694 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 20:04:50.131370 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 20:04:50.140931 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 20:04:50.148018 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:04:50.166427 kernel: loop2: detected capacity change from 0 to 140768
Oct 8 20:04:50.174868 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Oct 8 20:04:50.175654 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Oct 8 20:04:50.183211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:04:50.206248 kernel: loop3: detected capacity change from 0 to 211296
Oct 8 20:04:50.246245 kernel: loop4: detected capacity change from 0 to 8
Oct 8 20:04:50.251043 kernel: loop5: detected capacity change from 0 to 142488
Oct 8 20:04:50.277362 kernel: loop6: detected capacity change from 0 to 140768
Oct 8 20:04:50.294404 kernel: loop7: detected capacity change from 0 to 211296
Oct 8 20:04:50.315787 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 8 20:04:50.318739 (sd-merge)[1203]: Merged extensions into '/usr'.
Oct 8 20:04:50.323652 systemd[1]: Reloading requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 20:04:50.323666 systemd[1]: Reloading...
Oct 8 20:04:50.405245 zram_generator::config[1228]: No configuration found.
Oct 8 20:04:50.538613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:04:50.540458 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 20:04:50.581688 systemd[1]: Reloading finished in 257 ms.
Oct 8 20:04:50.607477 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 20:04:50.609028 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 20:04:50.618388 systemd[1]: Starting ensure-sysext.service...
Oct 8 20:04:50.624210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:04:50.629832 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Oct 8 20:04:50.629911 systemd[1]: Reloading...
Oct 8 20:04:50.644003 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 20:04:50.644327 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 20:04:50.645115 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 20:04:50.648141 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Oct 8 20:04:50.648245 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Oct 8 20:04:50.651124 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:04:50.651134 systemd-tmpfiles[1273]: Skipping /boot
Oct 8 20:04:50.664568 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:04:50.664582 systemd-tmpfiles[1273]: Skipping /boot
Oct 8 20:04:50.708245 zram_generator::config[1300]: No configuration found.
Oct 8 20:04:50.804385 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:04:50.848458 systemd[1]: Reloading finished in 218 ms.
Oct 8 20:04:50.868528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 20:04:50.873611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:04:50.884413 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:04:50.889354 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 20:04:50.892425 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 20:04:50.897715 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:04:50.906332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:04:50.911093 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 20:04:50.923459 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 20:04:50.925061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:50.925201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:04:50.929756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:04:50.939455 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:04:50.941129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:04:50.941751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:04:50.941834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:50.946364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:50.946998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:04:50.947181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:04:50.947322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:50.951368 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:50.951539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:04:50.959618 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:04:50.961382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:04:50.961541 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:50.962290 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 20:04:50.977925 systemd[1]: Finished ensure-sysext.service.
Oct 8 20:04:50.993390 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 20:04:50.996368 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 20:04:50.997558 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 20:04:50.998375 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Oct 8 20:04:50.998910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:04:50.999057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:04:51.000740 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 20:04:51.001821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:04:51.002598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:04:51.003724 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:04:51.004409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:04:51.005709 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:04:51.006135 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:04:51.020703 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:04:51.020760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:04:51.029786 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 20:04:51.031542 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:04:51.042499 augenrules[1388]: No rules
Oct 8 20:04:51.042785 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 20:04:51.045531 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:04:51.057862 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:04:51.067363 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:04:51.092918 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 20:04:51.093520 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 20:04:51.142650 systemd-resolved[1354]: Positive Trust Anchors:
Oct 8 20:04:51.142667 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:04:51.142694 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:04:51.148460 systemd-resolved[1354]: Using system hostname 'ci-4081-1-0-7-3c1e2fa9c6'.
Oct 8 20:04:51.149873 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:04:51.150628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:04:51.155804 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 20:04:51.162286 systemd-networkd[1395]: lo: Link UP
Oct 8 20:04:51.162293 systemd-networkd[1395]: lo: Gained carrier
Oct 8 20:04:51.163670 systemd-networkd[1395]: Enumeration completed
Oct 8 20:04:51.163765 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:04:51.164893 systemd[1]: Reached target network.target - Network.
Oct 8 20:04:51.168423 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1402)
Oct 8 20:04:51.172404 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 20:04:51.182244 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1402)
Oct 8 20:04:51.223402 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:51.223517 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:04:51.225447 systemd-networkd[1395]: eth0: Link UP
Oct 8 20:04:51.225510 systemd-networkd[1395]: eth0: Gained carrier
Oct 8 20:04:51.225561 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:51.232019 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:51.233566 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:04:51.235827 systemd-networkd[1395]: eth1: Link UP
Oct 8 20:04:51.235899 systemd-networkd[1395]: eth1: Gained carrier
Oct 8 20:04:51.235957 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:04:51.246664 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 20:04:51.246717 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 8 20:04:51.253243 kernel: ACPI: button: Power Button [PWRF]
Oct 8 20:04:51.268343 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:04:51.269751 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection.
Oct 8 20:04:51.283258 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct 8 20:04:51.288235 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1406)
Oct 8 20:04:51.288276 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct 8 20:04:51.287390 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Oct 8 20:04:51.287429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:51.287528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:04:51.297382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:04:51.300379 kernel: Console: switching to colour dummy device 80x25
Oct 8 20:04:51.300866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:04:51.303380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:04:51.303515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:04:51.303542 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:04:51.303553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:04:51.309427 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 8 20:04:51.309465 kernel: [drm] features: -context_init
Oct 8 20:04:51.311267 systemd-networkd[1395]: eth0: DHCPv4 address 49.13.138.82/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 20:04:51.311634 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection.
Oct 8 20:04:51.312086 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection.
Oct 8 20:04:51.317560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:04:51.317727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:04:51.319809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:04:51.320965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:04:51.321427 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:04:51.321756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:04:51.322020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:04:51.322062 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:04:51.340243 kernel: [drm] number of scanouts: 1
Oct 8 20:04:51.340289 kernel: [drm] number of cap sets: 0
Oct 8 20:04:51.344543 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 8 20:04:51.344984 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 8 20:04:51.345184 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 8 20:04:51.348233 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 8 20:04:51.360355 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 8 20:04:51.375803 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 8 20:04:51.375854 kernel: Console: switching to colour frame buffer device 160x50
Oct 8 20:04:51.379708 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 20:04:51.385927 kernel: EDAC MC: Ver: 3.0.0
Oct 8 20:04:51.392226 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 8 20:04:51.401498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 20:04:51.402973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:04:51.422487 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 20:04:51.425107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:04:51.425404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:04:51.431368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:04:51.484635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:04:51.535497 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 20:04:51.540444 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 20:04:51.551378 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:04:51.582316 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 20:04:51.583188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:04:51.583364 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:04:51.583557 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 20:04:51.583668 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 20:04:51.583926 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 20:04:51.584134 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 20:04:51.584213 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 20:04:51.585317 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 20:04:51.585349 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:04:51.585429 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:04:51.587698 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 20:04:51.589396 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 20:04:51.594501 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 20:04:51.596710 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 20:04:51.601314 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 20:04:51.604018 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:04:51.604546 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:04:51.605142 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:04:51.605201 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:04:51.607882 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:04:51.611341 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 20:04:51.617122 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 20:04:51.625434 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 20:04:51.628419 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 20:04:51.638357 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 20:04:51.638866 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 20:04:51.642403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 20:04:51.646339 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 20:04:51.650010 jq[1464]: false
Oct 8 20:04:51.652529 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Oct 8 20:04:51.655410 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 20:04:51.660364 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 20:04:51.674076 coreos-metadata[1462]: Oct 08 20:04:51.673 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 8 20:04:51.675655 coreos-metadata[1462]: Oct 08 20:04:51.675 INFO Fetch successful
Oct 8 20:04:51.676313 coreos-metadata[1462]: Oct 08 20:04:51.675 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 8 20:04:51.676889 coreos-metadata[1462]: Oct 08 20:04:51.676 INFO Fetch successful
Oct 8 20:04:51.677365 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 20:04:51.678279 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 20:04:51.678718 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 20:04:51.681521 dbus-daemon[1463]: [system] SELinux support is enabled
Oct 8 20:04:51.686318 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 20:04:51.694650 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 20:04:51.695748 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 20:04:51.699507 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 20:04:51.710698 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 20:04:51.710905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 20:04:51.718649 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 20:04:51.720412 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 20:04:51.724249 update_engine[1475]: I20241008 20:04:51.723873 1475 main.cc:92] Flatcar Update Engine starting
Oct 8 20:04:51.725013 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 20:04:51.725190 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 20:04:51.727142 update_engine[1475]: I20241008 20:04:51.726624 1475 update_check_scheduler.cc:74] Next update check in 11m48s
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found loop4
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found loop5
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found loop6
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found loop7
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda1
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda2
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda3
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found usr
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda4
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda6
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda7
Oct 8 20:04:51.732174 extend-filesystems[1465]: Found sda9
Oct 8 20:04:51.732174 extend-filesystems[1465]: Checking size of /dev/sda9
Oct 8 20:04:51.789091 jq[1477]: true
Oct 8 20:04:51.752530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 20:04:51.752571 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 20:04:51.758794 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 20:04:51.758815 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 20:04:51.759535 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 20:04:51.774331 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 20:04:51.789146 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 20:04:51.801349 extend-filesystems[1465]: Resized partition /dev/sda9
Oct 8 20:04:51.801794 jq[1497]: true
Oct 8 20:04:51.810347 extend-filesystems[1508]: resize2fs 1.47.1 (20-May-2024)
Oct 8 20:04:51.829983 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 8 20:04:51.830043 tar[1486]: linux-amd64/helm
Oct 8 20:04:51.894409 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 20:04:51.896238 systemd-logind[1473]: New seat seat0.
Oct 8 20:04:51.901698 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 20:04:51.904751 systemd-logind[1473]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 8 20:04:51.904775 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 20:04:51.904946 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 20:04:51.959334 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1406)
Oct 8 20:04:52.004267 bash[1532]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:04:52.007180 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 20:04:52.024429 systemd[1]: Starting sshkeys.service...
Oct 8 20:04:52.056196 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 8 20:04:52.063331 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 20:04:52.074742 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 20:04:52.084092 extend-filesystems[1508]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 8 20:04:52.084092 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 8 20:04:52.084092 extend-filesystems[1508]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 8 20:04:52.085462 extend-filesystems[1465]: Resized filesystem in /dev/sda9 Oct 8 20:04:52.085462 extend-filesystems[1465]: Found sr0 Oct 8 20:04:52.091415 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:04:52.087999 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:04:52.088239 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:04:52.089261 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:04:52.104929 containerd[1500]: time="2024-10-08T20:04:52.104417139Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:04:52.109481 coreos-metadata[1543]: Oct 08 20:04:52.109 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 8 20:04:52.112321 coreos-metadata[1543]: Oct 08 20:04:52.111 INFO Fetch successful Oct 8 20:04:52.115480 unknown[1543]: wrote ssh authorized keys file for user: core Oct 8 20:04:52.137622 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:04:52.142640 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:04:52.150029 containerd[1500]: time="2024-10-08T20:04:52.149994390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:04:52.151985 containerd[1500]: time="2024-10-08T20:04:52.151956991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:04:52.152067 containerd[1500]: time="2024-10-08T20:04:52.152053151Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:04:52.152117 containerd[1500]: time="2024-10-08T20:04:52.152105329Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:04:52.152329 containerd[1500]: time="2024-10-08T20:04:52.152312978Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:04:52.152406 containerd[1500]: time="2024-10-08T20:04:52.152391656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:04:52.152522 containerd[1500]: time="2024-10-08T20:04:52.152504537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:04:52.152902 containerd[1500]: time="2024-10-08T20:04:52.152880863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:04:52.153893 containerd[1500]: time="2024-10-08T20:04:52.153499093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:04:52.153893 containerd[1500]: time="2024-10-08T20:04:52.153516675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 8 20:04:52.153893 containerd[1500]: time="2024-10-08T20:04:52.153529399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:04:52.153893 containerd[1500]: time="2024-10-08T20:04:52.153538476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:04:52.153893 containerd[1500]: time="2024-10-08T20:04:52.153623215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:04:52.153893 containerd[1500]: time="2024-10-08T20:04:52.153850321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:04:52.154385 containerd[1500]: time="2024-10-08T20:04:52.154037623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:04:52.154385 containerd[1500]: time="2024-10-08T20:04:52.154051789Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:04:52.154659 containerd[1500]: time="2024-10-08T20:04:52.154493346Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 20:04:52.154659 containerd[1500]: time="2024-10-08T20:04:52.154549713Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:04:52.156760 update-ssh-keys[1560]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:04:52.158209 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 20:04:52.164049 systemd[1]: Finished sshkeys.service. 
Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170194107Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170283455Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170307720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170324592Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170337336Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170500472Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170688725Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170800054Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170814721Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170825702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170837914Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170850238Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170860597Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:04:52.175340 containerd[1500]: time="2024-10-08T20:04:52.170873071Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:04:52.172533 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.170887808Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.170908166Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.170926080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.170941869Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.170981354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.170996682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171008374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171020076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171030576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171059550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171077273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171104434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171127778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175673 containerd[1500]: time="2024-10-08T20:04:52.171150000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.172732 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171161281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171172542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171185005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171199312Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171234398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171246350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171254987Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171324056Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171341168Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171350235Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171360925Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171369081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171379289Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Oct 8 20:04:52.175923 containerd[1500]: time="2024-10-08T20:04:52.171387665Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:04:52.176122 containerd[1500]: time="2024-10-08T20:04:52.171396172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 20:04:52.176146 containerd[1500]: time="2024-10-08T20:04:52.171667280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:04:52.176146 containerd[1500]: time="2024-10-08T20:04:52.171744444Z" level=info msg="Connect containerd service" Oct 8 20:04:52.176146 containerd[1500]: time="2024-10-08T20:04:52.171799137Z" level=info msg="using legacy CRI server" Oct 8 20:04:52.176146 containerd[1500]: time="2024-10-08T20:04:52.171809206Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:04:52.176146 containerd[1500]: time="2024-10-08T20:04:52.171900898Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:04:52.177535 containerd[1500]: time="2024-10-08T20:04:52.177503473Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:04:52.177737 containerd[1500]: time="2024-10-08T20:04:52.177706954Z" level=info msg="Start subscribing containerd event" Oct 8 
20:04:52.177819 containerd[1500]: time="2024-10-08T20:04:52.177807313Z" level=info msg="Start recovering state" Oct 8 20:04:52.178030 containerd[1500]: time="2024-10-08T20:04:52.178017367Z" level=info msg="Start event monitor" Oct 8 20:04:52.178088 containerd[1500]: time="2024-10-08T20:04:52.178077500Z" level=info msg="Start snapshots syncer" Oct 8 20:04:52.178144 containerd[1500]: time="2024-10-08T20:04:52.178132422Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:04:52.178183 containerd[1500]: time="2024-10-08T20:04:52.178173690Z" level=info msg="Start streaming server" Oct 8 20:04:52.178679 containerd[1500]: time="2024-10-08T20:04:52.178663398Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:04:52.178837 containerd[1500]: time="2024-10-08T20:04:52.178805795Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:04:52.179058 containerd[1500]: time="2024-10-08T20:04:52.179044372Z" level=info msg="containerd successfully booted in 0.076412s" Oct 8 20:04:52.180445 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:04:52.180978 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:04:52.203795 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:04:52.210470 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:04:52.215545 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 20:04:52.217013 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:04:52.455269 tar[1486]: linux-amd64/LICENSE Oct 8 20:04:52.455269 tar[1486]: linux-amd64/README.md Oct 8 20:04:52.476428 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:04:52.650428 systemd-networkd[1395]: eth1: Gained IPv6LL Oct 8 20:04:52.651300 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. 
Oct 8 20:04:52.654992 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:04:52.657051 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:04:52.666405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:04:52.669464 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:04:52.697669 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:04:53.034599 systemd-networkd[1395]: eth0: Gained IPv6LL Oct 8 20:04:53.035368 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Oct 8 20:04:53.389404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:04:53.389526 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:04:53.391360 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:04:53.393924 systemd[1]: Startup finished in 1.162s (kernel) + 5.464s (initrd) + 4.283s (userspace) = 10.910s. Oct 8 20:04:53.989496 kubelet[1595]: E1008 20:04:53.989409 1595 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:04:53.993045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:04:53.993284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:05:04.243681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:05:04.249601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:05:04.382862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:05:04.387063 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:05:04.432155 kubelet[1615]: E1008 20:05:04.432088 1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:05:04.438453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:05:04.438641 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:05:14.689071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 20:05:14.694427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:05:14.815659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:05:14.819704 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:05:14.861126 kubelet[1632]: E1008 20:05:14.861078 1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:05:14.864786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:05:14.864979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:05:23.465209 systemd-timesyncd[1370]: Contacted time server 94.130.23.46:123 (2.flatcar.pool.ntp.org). Oct 8 20:05:23.465385 systemd-timesyncd[1370]: Initial clock synchronization to Tue 2024-10-08 20:05:23.096121 UTC. 
Oct 8 20:05:25.115370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 20:05:25.120380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:05:25.253166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:05:25.262581 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:05:25.306334 kubelet[1648]: E1008 20:05:25.306273 1648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:05:25.311120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:05:25.311328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:05:35.561686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 20:05:35.567410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:05:35.683055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:05:35.687018 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:05:35.733654 kubelet[1665]: E1008 20:05:35.733591 1665 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:05:35.737504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:05:35.737703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:05:36.506474 update_engine[1475]: I20241008 20:05:36.506328 1475 update_attempter.cc:509] Updating boot flags... Oct 8 20:05:36.578316 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1681) Oct 8 20:05:36.635436 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1683) Oct 8 20:05:36.678964 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1683) Oct 8 20:05:45.969718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 20:05:45.975588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:05:46.101372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:05:46.103169 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:05:46.149057 kubelet[1701]: E1008 20:05:46.148989 1701 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:05:46.153844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:05:46.154042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:05:56.219863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 20:05:56.233498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:05:56.364325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:05:56.368370 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:05:56.410189 kubelet[1716]: E1008 20:05:56.410116 1716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:05:56.418152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:05:56.418369 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:06:06.469653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 20:06:06.475447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 20:06:06.593937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:06:06.598342 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:06:06.637966 kubelet[1732]: E1008 20:06:06.637897 1732 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:06:06.641943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:06:06.642193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:06:16.719533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 8 20:06:16.724391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:06:16.847159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:06:16.851421 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:06:16.886966 kubelet[1748]: E1008 20:06:16.886895 1748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:06:16.890684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:06:16.890931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:06:26.969553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Oct 8 20:06:26.976404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:06:27.106598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:06:27.111461 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:06:27.149385 kubelet[1764]: E1008 20:06:27.149322 1764 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:06:27.153015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:06:27.153200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:06:37.220041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 8 20:06:37.227549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:06:37.389370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:06:37.393518 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:06:37.433703 kubelet[1781]: E1008 20:06:37.433637 1781 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:06:37.438403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:06:37.438620 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:06:47.469904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 8 20:06:47.477562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:06:47.640321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:06:47.644200 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:06:47.680104 kubelet[1798]: E1008 20:06:47.680019 1798 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:06:47.684513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:06:47.684696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:06:49.578905 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:06:49.583462 systemd[1]: Started sshd@0-49.13.138.82:22-147.75.109.163:58038.service - OpenSSH per-connection server daemon (147.75.109.163:58038). Oct 8 20:06:50.595634 sshd[1808]: Accepted publickey for core from 147.75.109.163 port 58038 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:06:50.597676 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:50.606711 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:06:50.612444 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:06:50.615852 systemd-logind[1473]: New session 1 of user core. Oct 8 20:06:50.630423 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 8 20:06:50.636792 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:06:50.641311 (systemd)[1812]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:06:50.740658 systemd[1812]: Queued start job for default target default.target. Oct 8 20:06:50.752558 systemd[1812]: Created slice app.slice - User Application Slice. Oct 8 20:06:50.752588 systemd[1812]: Reached target paths.target - Paths. Oct 8 20:06:50.752600 systemd[1812]: Reached target timers.target - Timers. Oct 8 20:06:50.754095 systemd[1812]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:06:50.766539 systemd[1812]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:06:50.766715 systemd[1812]: Reached target sockets.target - Sockets. Oct 8 20:06:50.766740 systemd[1812]: Reached target basic.target - Basic System. Oct 8 20:06:50.766792 systemd[1812]: Reached target default.target - Main User Target. Oct 8 20:06:50.766836 systemd[1812]: Startup finished in 119ms. Oct 8 20:06:50.766906 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:06:50.781380 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:06:51.476261 systemd[1]: Started sshd@1-49.13.138.82:22-147.75.109.163:58050.service - OpenSSH per-connection server daemon (147.75.109.163:58050). Oct 8 20:06:52.437938 sshd[1823]: Accepted publickey for core from 147.75.109.163 port 58050 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:06:52.440071 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:52.445137 systemd-logind[1473]: New session 2 of user core. Oct 8 20:06:52.461370 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:06:53.104590 sshd[1823]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:53.109735 systemd[1]: sshd@1-49.13.138.82:22-147.75.109.163:58050.service: Deactivated successfully. 
Oct 8 20:06:53.113137 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:06:53.115827 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:06:53.117943 systemd-logind[1473]: Removed session 2. Oct 8 20:06:53.286607 systemd[1]: Started sshd@2-49.13.138.82:22-147.75.109.163:58056.service - OpenSSH per-connection server daemon (147.75.109.163:58056). Oct 8 20:06:54.258427 sshd[1830]: Accepted publickey for core from 147.75.109.163 port 58056 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:06:54.261010 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:54.268510 systemd-logind[1473]: New session 3 of user core. Oct 8 20:06:54.281388 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:06:54.933733 sshd[1830]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:54.936522 systemd[1]: sshd@2-49.13.138.82:22-147.75.109.163:58056.service: Deactivated successfully. Oct 8 20:06:54.938970 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 20:06:54.940383 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit. Oct 8 20:06:54.941496 systemd-logind[1473]: Removed session 3. Oct 8 20:06:55.097864 systemd[1]: Started sshd@3-49.13.138.82:22-147.75.109.163:58060.service - OpenSSH per-connection server daemon (147.75.109.163:58060). Oct 8 20:06:56.068826 sshd[1837]: Accepted publickey for core from 147.75.109.163 port 58060 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:06:56.070712 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:56.075287 systemd-logind[1473]: New session 4 of user core. Oct 8 20:06:56.085365 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 8 20:06:56.754144 sshd[1837]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:56.757890 systemd[1]: sshd@3-49.13.138.82:22-147.75.109.163:58060.service: Deactivated successfully. Oct 8 20:06:56.760237 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:06:56.762089 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:06:56.763189 systemd-logind[1473]: Removed session 4. Oct 8 20:06:56.933625 systemd[1]: Started sshd@4-49.13.138.82:22-147.75.109.163:58070.service - OpenSSH per-connection server daemon (147.75.109.163:58070). Oct 8 20:06:57.719620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 8 20:06:57.727739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:06:57.856552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:06:57.860163 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:06:57.902427 kubelet[1854]: E1008 20:06:57.902376 1854 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:06:57.905475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:06:57.905652 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:06:57.934436 sshd[1844]: Accepted publickey for core from 147.75.109.163 port 58070 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:06:57.936073 sshd[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:57.940167 systemd-logind[1473]: New session 5 of user core. 
Oct 8 20:06:57.950347 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 20:06:58.471070 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:06:58.471511 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:06:58.487348 sudo[1863]: pam_unix(sudo:session): session closed for user root Oct 8 20:06:58.648786 sshd[1844]: pam_unix(sshd:session): session closed for user core Oct 8 20:06:58.652859 systemd[1]: sshd@4-49.13.138.82:22-147.75.109.163:58070.service: Deactivated successfully. Oct 8 20:06:58.654735 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:06:58.655375 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:06:58.656308 systemd-logind[1473]: Removed session 5. Oct 8 20:06:58.811944 systemd[1]: Started sshd@5-49.13.138.82:22-147.75.109.163:58118.service - OpenSSH per-connection server daemon (147.75.109.163:58118). Oct 8 20:06:59.785499 sshd[1868]: Accepted publickey for core from 147.75.109.163 port 58118 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:06:59.788971 sshd[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:06:59.797595 systemd-logind[1473]: New session 6 of user core. Oct 8 20:06:59.810515 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 8 20:07:00.308234 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:07:00.308648 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:07:00.314427 sudo[1872]: pam_unix(sudo:session): session closed for user root Oct 8 20:07:00.325741 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:07:00.326473 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:07:00.347646 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:07:00.367464 auditctl[1875]: No rules Oct 8 20:07:00.368296 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:07:00.368660 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:07:00.383314 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:07:00.413650 augenrules[1893]: No rules Oct 8 20:07:00.415783 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:07:00.417197 sudo[1871]: pam_unix(sudo:session): session closed for user root Oct 8 20:07:00.575996 sshd[1868]: pam_unix(sshd:session): session closed for user core Oct 8 20:07:00.580246 systemd[1]: sshd@5-49.13.138.82:22-147.75.109.163:58118.service: Deactivated successfully. Oct 8 20:07:00.582151 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:07:00.582778 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:07:00.584008 systemd-logind[1473]: Removed session 6. Oct 8 20:07:00.742471 systemd[1]: Started sshd@6-49.13.138.82:22-147.75.109.163:58134.service - OpenSSH per-connection server daemon (147.75.109.163:58134). 
Oct 8 20:07:01.710138 sshd[1901]: Accepted publickey for core from 147.75.109.163 port 58134 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:07:01.711997 sshd[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:07:01.717062 systemd-logind[1473]: New session 7 of user core. Oct 8 20:07:01.723437 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:07:02.224942 sudo[1904]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:07:02.225425 sudo[1904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:07:02.482434 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 20:07:02.485128 (dockerd)[1919]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:07:02.741809 dockerd[1919]: time="2024-10-08T20:07:02.741649115Z" level=info msg="Starting up" Oct 8 20:07:02.805554 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3140469734-merged.mount: Deactivated successfully. Oct 8 20:07:02.833226 dockerd[1919]: time="2024-10-08T20:07:02.833178374Z" level=info msg="Loading containers: start." Oct 8 20:07:02.936252 kernel: Initializing XFRM netlink socket Oct 8 20:07:03.006770 systemd-networkd[1395]: docker0: Link UP Oct 8 20:07:03.021567 dockerd[1919]: time="2024-10-08T20:07:03.021532709Z" level=info msg="Loading containers: done." 
Oct 8 20:07:03.038290 dockerd[1919]: time="2024-10-08T20:07:03.038212006Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:07:03.038431 dockerd[1919]: time="2024-10-08T20:07:03.038357769Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:07:03.038511 dockerd[1919]: time="2024-10-08T20:07:03.038483635Z" level=info msg="Daemon has completed initialization" Oct 8 20:07:03.064890 dockerd[1919]: time="2024-10-08T20:07:03.064840470Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:07:03.065279 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:07:04.008416 containerd[1500]: time="2024-10-08T20:07:04.008330119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 20:07:04.577496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480211590.mount: Deactivated successfully. 
Oct 8 20:07:05.602057 containerd[1500]: time="2024-10-08T20:07:05.601988026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:05.603106 containerd[1500]: time="2024-10-08T20:07:05.602861946Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213933" Oct 8 20:07:05.603923 containerd[1500]: time="2024-10-08T20:07:05.603880906Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:05.605942 containerd[1500]: time="2024-10-08T20:07:05.605910734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:05.606899 containerd[1500]: time="2024-10-08T20:07:05.606763765Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 1.598396726s" Oct 8 20:07:05.606899 containerd[1500]: time="2024-10-08T20:07:05.606792899Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 8 20:07:05.624705 containerd[1500]: time="2024-10-08T20:07:05.624484902Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 20:07:07.097408 containerd[1500]: time="2024-10-08T20:07:07.097350087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:07.098307 containerd[1500]: time="2024-10-08T20:07:07.098263592Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208693" Oct 8 20:07:07.099283 containerd[1500]: time="2024-10-08T20:07:07.099231660Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:07.101666 containerd[1500]: time="2024-10-08T20:07:07.101626097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:07.102549 containerd[1500]: time="2024-10-08T20:07:07.102443623Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.477902635s" Oct 8 20:07:07.102549 containerd[1500]: time="2024-10-08T20:07:07.102469130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 8 20:07:07.122229 containerd[1500]: time="2024-10-08T20:07:07.122176104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 8 20:07:07.969370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Oct 8 20:07:07.976371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:08.107453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:07:08.110020 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:07:08.156859 kubelet[2144]: E1008 20:07:08.156812 2144 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:07:08.161262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:07:08.161445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:07:08.202661 containerd[1500]: time="2024-10-08T20:07:08.202600475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:08.203556 containerd[1500]: time="2024-10-08T20:07:08.203419082Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320476" Oct 8 20:07:08.204286 containerd[1500]: time="2024-10-08T20:07:08.204265053Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:08.206488 containerd[1500]: time="2024-10-08T20:07:08.206452743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:08.207709 containerd[1500]: time="2024-10-08T20:07:08.207680149Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.085471413s" Oct 8 20:07:08.207768 containerd[1500]: time="2024-10-08T20:07:08.207707940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 8 20:07:08.226201 containerd[1500]: time="2024-10-08T20:07:08.226094124Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 8 20:07:09.173171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874661638.mount: Deactivated successfully. Oct 8 20:07:09.457775 containerd[1500]: time="2024-10-08T20:07:09.457633524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:09.458632 containerd[1500]: time="2024-10-08T20:07:09.458587348Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601776" Oct 8 20:07:09.459474 containerd[1500]: time="2024-10-08T20:07:09.459423219Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:09.460981 containerd[1500]: time="2024-10-08T20:07:09.460961009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:09.461693 containerd[1500]: time="2024-10-08T20:07:09.461557270Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.235427749s" Oct 8 20:07:09.461693 containerd[1500]: time="2024-10-08T20:07:09.461588709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 8 20:07:09.481640 containerd[1500]: time="2024-10-08T20:07:09.481596148Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:07:10.020387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831309833.mount: Deactivated successfully. Oct 8 20:07:10.728477 containerd[1500]: time="2024-10-08T20:07:10.728397796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:10.729497 containerd[1500]: time="2024-10-08T20:07:10.729458131Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Oct 8 20:07:10.730550 containerd[1500]: time="2024-10-08T20:07:10.730475123Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:10.733151 containerd[1500]: time="2024-10-08T20:07:10.733104899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:10.734046 containerd[1500]: time="2024-10-08T20:07:10.734001355Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.252362786s" Oct 8 20:07:10.734046 containerd[1500]: time="2024-10-08T20:07:10.734042403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 20:07:10.755984 containerd[1500]: time="2024-10-08T20:07:10.755852301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 20:07:11.355924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384922547.mount: Deactivated successfully. Oct 8 20:07:11.362951 containerd[1500]: time="2024-10-08T20:07:11.362898287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:11.363831 containerd[1500]: time="2024-10-08T20:07:11.363773865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Oct 8 20:07:11.364533 containerd[1500]: time="2024-10-08T20:07:11.364458694Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:11.367001 containerd[1500]: time="2024-10-08T20:07:11.366944671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:11.368706 containerd[1500]: time="2024-10-08T20:07:11.367928413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 612.04359ms" Oct 8 20:07:11.368706 
containerd[1500]: time="2024-10-08T20:07:11.367966445Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 20:07:11.393478 containerd[1500]: time="2024-10-08T20:07:11.393455296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 8 20:07:11.933018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626639458.mount: Deactivated successfully. Oct 8 20:07:15.023997 containerd[1500]: time="2024-10-08T20:07:15.023940202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:15.024971 containerd[1500]: time="2024-10-08T20:07:15.024923365Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705" Oct 8 20:07:15.025678 containerd[1500]: time="2024-10-08T20:07:15.025637402Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:15.028053 containerd[1500]: time="2024-10-08T20:07:15.028000896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:15.029305 containerd[1500]: time="2024-10-08T20:07:15.029152217Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.63558941s" Oct 8 20:07:15.029305 containerd[1500]: time="2024-10-08T20:07:15.029184909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference 
\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 8 20:07:18.136441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:07:18.143448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:18.165330 systemd[1]: Reloading requested from client PID 2341 ('systemctl') (unit session-7.scope)... Oct 8 20:07:18.165454 systemd[1]: Reloading... Oct 8 20:07:18.284268 zram_generator::config[2381]: No configuration found. Oct 8 20:07:18.383902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:07:18.448502 systemd[1]: Reloading finished in 282 ms. Oct 8 20:07:18.496995 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 20:07:18.497088 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 20:07:18.497452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:07:18.504466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:18.625678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:07:18.631523 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:07:18.676289 kubelet[2435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:07:18.676289 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 8 20:07:18.676289 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:07:18.676615 kubelet[2435]: I1008 20:07:18.676356 2435 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 20:07:19.231353 kubelet[2435]: I1008 20:07:19.230781 2435 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 20:07:19.231353 kubelet[2435]: I1008 20:07:19.230811 2435 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 20:07:19.231353 kubelet[2435]: I1008 20:07:19.231015 2435 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 20:07:19.254577 kubelet[2435]: I1008 20:07:19.254134 2435 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:07:19.254577 kubelet[2435]: E1008 20:07:19.254538 2435 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.138.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.266920 kubelet[2435]: I1008 20:07:19.266891 2435 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 20:07:19.267180 kubelet[2435]: I1008 20:07:19.267152 2435 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 20:07:19.268135 kubelet[2435]: I1008 20:07:19.268094 2435 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 20:07:19.268135 kubelet[2435]: I1008 20:07:19.268138 2435 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 20:07:19.268284 kubelet[2435]: I1008 20:07:19.268148 2435 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 20:07:19.268284 kubelet[2435]: I1008 20:07:19.268272 2435 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:07:19.268372 kubelet[2435]: I1008 20:07:19.268360 2435 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 20:07:19.268404 kubelet[2435]: I1008 20:07:19.268378 2435 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 20:07:19.268404 kubelet[2435]: I1008 20:07:19.268402 2435 kubelet.go:312] "Adding apiserver pod source"
Oct 8 20:07:19.268455 kubelet[2435]: I1008 20:07:19.268431 2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 20:07:19.270844 kubelet[2435]: W1008 20:07:19.270360 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.138.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.270844 kubelet[2435]: E1008 20:07:19.270414 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.138.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.272259 kubelet[2435]: I1008 20:07:19.271895 2435 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 20:07:19.276316 kubelet[2435]: I1008 20:07:19.276297 2435 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 20:07:19.278461 kubelet[2435]: W1008 20:07:19.278418 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.13.138.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-3c1e2fa9c6&limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.278567 kubelet[2435]: E1008 20:07:19.278553 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.138.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-3c1e2fa9c6&limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.279399 kubelet[2435]: W1008 20:07:19.279361 2435 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 20:07:19.280241 kubelet[2435]: I1008 20:07:19.279940 2435 server.go:1256] "Started kubelet"
Oct 8 20:07:19.280557 kubelet[2435]: I1008 20:07:19.280528 2435 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 20:07:19.281395 kubelet[2435]: I1008 20:07:19.281366 2435 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 20:07:19.284333 kubelet[2435]: I1008 20:07:19.283694 2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 20:07:19.284333 kubelet[2435]: I1008 20:07:19.283891 2435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 20:07:19.284333 kubelet[2435]: I1008 20:07:19.284054 2435 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 20:07:19.285870 kubelet[2435]: E1008 20:07:19.285513 2435 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.138.82:6443/api/v1/namespaces/default/events\": dial tcp 49.13.138.82:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-7-3c1e2fa9c6.17fc930c96a82343 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-7-3c1e2fa9c6,UID:ci-4081-1-0-7-3c1e2fa9c6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7-3c1e2fa9c6,},FirstTimestamp:2024-10-08 20:07:19.279919939 +0000 UTC m=+0.644528069,LastTimestamp:2024-10-08 20:07:19.279919939 +0000 UTC m=+0.644528069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7-3c1e2fa9c6,}"
Oct 8 20:07:19.290299 kubelet[2435]: E1008 20:07:19.290286 2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-1-0-7-3c1e2fa9c6\" not found"
Oct 8 20:07:19.290431 kubelet[2435]: I1008 20:07:19.290402 2435 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 20:07:19.290570 kubelet[2435]: I1008 20:07:19.290559 2435 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 20:07:19.290715 kubelet[2435]: I1008 20:07:19.290704 2435 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 20:07:19.291099 kubelet[2435]: W1008 20:07:19.291069 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.138.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.291795 kubelet[2435]: E1008 20:07:19.291161 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.138.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.291795 kubelet[2435]: E1008 20:07:19.291700 2435 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 20:07:19.291795 kubelet[2435]: E1008 20:07:19.291770 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.138.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-3c1e2fa9c6?timeout=10s\": dial tcp 49.13.138.82:6443: connect: connection refused" interval="200ms"
Oct 8 20:07:19.293344 kubelet[2435]: I1008 20:07:19.292556 2435 factory.go:221] Registration of the systemd container factory successfully
Oct 8 20:07:19.293344 kubelet[2435]: I1008 20:07:19.292638 2435 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 20:07:19.294812 kubelet[2435]: I1008 20:07:19.294759 2435 factory.go:221] Registration of the containerd container factory successfully
Oct 8 20:07:19.301880 kubelet[2435]: I1008 20:07:19.301847 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 20:07:19.302935 kubelet[2435]: I1008 20:07:19.302913 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 20:07:19.302971 kubelet[2435]: I1008 20:07:19.302940 2435 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 20:07:19.302971 kubelet[2435]: I1008 20:07:19.302958 2435 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 20:07:19.303018 kubelet[2435]: E1008 20:07:19.302995 2435 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 20:07:19.309958 kubelet[2435]: W1008 20:07:19.309913 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.138.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.309958 kubelet[2435]: E1008 20:07:19.309952 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.138.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:19.319591 kubelet[2435]: I1008 20:07:19.319482 2435 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 20:07:19.319591 kubelet[2435]: I1008 20:07:19.319581 2435 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 20:07:19.319660 kubelet[2435]: I1008 20:07:19.319606 2435 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:07:19.321069 kubelet[2435]: I1008 20:07:19.321046 2435 policy_none.go:49] "None policy: Start"
Oct 8 20:07:19.321525 kubelet[2435]: I1008 20:07:19.321501 2435 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 20:07:19.321570 kubelet[2435]: I1008 20:07:19.321533 2435 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 20:07:19.327503 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 8 20:07:19.335711 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 8 20:07:19.338709 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 8 20:07:19.342945 kubelet[2435]: I1008 20:07:19.342921 2435 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 20:07:19.343264 kubelet[2435]: I1008 20:07:19.343134 2435 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 20:07:19.344854 kubelet[2435]: E1008 20:07:19.344566 2435 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-7-3c1e2fa9c6\" not found"
Oct 8 20:07:19.392482 kubelet[2435]: I1008 20:07:19.392431 2435 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.392779 kubelet[2435]: E1008 20:07:19.392757 2435 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.138.82:6443/api/v1/nodes\": dial tcp 49.13.138.82:6443: connect: connection refused" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.403984 kubelet[2435]: I1008 20:07:19.403944 2435 topology_manager.go:215] "Topology Admit Handler" podUID="e9cd5c7e1f5d9eb6ae422e8aa8a3d297" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.405375 kubelet[2435]: I1008 20:07:19.405332 2435 topology_manager.go:215] "Topology Admit Handler" podUID="d512fd84d3d70b7fbdcdf07661629cee" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.407388 kubelet[2435]: I1008 20:07:19.407353 2435 topology_manager.go:215] "Topology Admit Handler" podUID="4c7edc2b675ff572e4226d21b15ce96b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.413566 systemd[1]: Created slice kubepods-burstable-pode9cd5c7e1f5d9eb6ae422e8aa8a3d297.slice - libcontainer container kubepods-burstable-pode9cd5c7e1f5d9eb6ae422e8aa8a3d297.slice.
Oct 8 20:07:19.435798 systemd[1]: Created slice kubepods-burstable-pod4c7edc2b675ff572e4226d21b15ce96b.slice - libcontainer container kubepods-burstable-pod4c7edc2b675ff572e4226d21b15ce96b.slice.
Oct 8 20:07:19.440345 systemd[1]: Created slice kubepods-burstable-podd512fd84d3d70b7fbdcdf07661629cee.slice - libcontainer container kubepods-burstable-podd512fd84d3d70b7fbdcdf07661629cee.slice.
Oct 8 20:07:19.492597 kubelet[2435]: E1008 20:07:19.492474 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.138.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-3c1e2fa9c6?timeout=10s\": dial tcp 49.13.138.82:6443: connect: connection refused" interval="400ms"
Oct 8 20:07:19.592161 kubelet[2435]: I1008 20:07:19.592034 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9cd5c7e1f5d9eb6ae422e8aa8a3d297-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"e9cd5c7e1f5d9eb6ae422e8aa8a3d297\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592161 kubelet[2435]: I1008 20:07:19.592137 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9cd5c7e1f5d9eb6ae422e8aa8a3d297-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"e9cd5c7e1f5d9eb6ae422e8aa8a3d297\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592161 kubelet[2435]: I1008 20:07:19.592186 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592502 kubelet[2435]: I1008 20:07:19.592245 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592502 kubelet[2435]: I1008 20:07:19.592275 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592502 kubelet[2435]: I1008 20:07:19.592300 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9cd5c7e1f5d9eb6ae422e8aa8a3d297-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"e9cd5c7e1f5d9eb6ae422e8aa8a3d297\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592502 kubelet[2435]: I1008 20:07:19.592333 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592502 kubelet[2435]: I1008 20:07:19.592357 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.592713 kubelet[2435]: I1008 20:07:19.592381 2435 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c7edc2b675ff572e4226d21b15ce96b-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"4c7edc2b675ff572e4226d21b15ce96b\") " pod="kube-system/kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.595902 kubelet[2435]: I1008 20:07:19.595794 2435 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.596424 kubelet[2435]: E1008 20:07:19.596386 2435 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.138.82:6443/api/v1/nodes\": dial tcp 49.13.138.82:6443: connect: connection refused" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:19.735740 containerd[1500]: time="2024-10-08T20:07:19.735606440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6,Uid:e9cd5c7e1f5d9eb6ae422e8aa8a3d297,Namespace:kube-system,Attempt:0,}"
Oct 8 20:07:19.748685 containerd[1500]: time="2024-10-08T20:07:19.747907275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6,Uid:4c7edc2b675ff572e4226d21b15ce96b,Namespace:kube-system,Attempt:0,}"
Oct 8 20:07:19.748685 containerd[1500]: time="2024-10-08T20:07:19.747917334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6,Uid:d512fd84d3d70b7fbdcdf07661629cee,Namespace:kube-system,Attempt:0,}"
Oct 8 20:07:19.894542 kubelet[2435]: E1008 20:07:19.894481 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.138.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-3c1e2fa9c6?timeout=10s\": dial tcp 49.13.138.82:6443: connect: connection refused" interval="800ms"
Oct 8 20:07:20.000488 kubelet[2435]: I1008 20:07:20.000018 2435 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:20.000657 kubelet[2435]: E1008 20:07:20.000493 2435 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.138.82:6443/api/v1/nodes\": dial tcp 49.13.138.82:6443: connect: connection refused" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:20.158570 kubelet[2435]: W1008 20:07:20.158373 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.138.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.158570 kubelet[2435]: E1008 20:07:20.158435 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.138.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.284811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957785136.mount: Deactivated successfully.
Oct 8 20:07:20.290820 containerd[1500]: time="2024-10-08T20:07:20.290729879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:07:20.292524 containerd[1500]: time="2024-10-08T20:07:20.292459646Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076"
Oct 8 20:07:20.293699 containerd[1500]: time="2024-10-08T20:07:20.293468451Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:07:20.294504 containerd[1500]: time="2024-10-08T20:07:20.294406813Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:07:20.296044 containerd[1500]: time="2024-10-08T20:07:20.295998219Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:07:20.296479 containerd[1500]: time="2024-10-08T20:07:20.296207043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 20:07:20.297576 containerd[1500]: time="2024-10-08T20:07:20.297534932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 20:07:20.299853 containerd[1500]: time="2024-10-08T20:07:20.299771597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 20:07:20.305787 containerd[1500]: time="2024-10-08T20:07:20.305758905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.988776ms"
Oct 8 20:07:20.308108 kubelet[2435]: W1008 20:07:20.306693 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.13.138.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-3c1e2fa9c6&limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.308108 kubelet[2435]: E1008 20:07:20.306745 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.138.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-3c1e2fa9c6&limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.316018 containerd[1500]: time="2024-10-08T20:07:20.315851057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.367213ms"
Oct 8 20:07:20.317556 containerd[1500]: time="2024-10-08T20:07:20.317501193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.483019ms"
Oct 8 20:07:20.351506 kubelet[2435]: W1008 20:07:20.350304 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.138.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.351506 kubelet[2435]: E1008 20:07:20.350363 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.138.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.439055 containerd[1500]: time="2024-10-08T20:07:20.438773201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:07:20.439055 containerd[1500]: time="2024-10-08T20:07:20.438850928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:07:20.439055 containerd[1500]: time="2024-10-08T20:07:20.438873450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:20.439055 containerd[1500]: time="2024-10-08T20:07:20.438965484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:20.439574 containerd[1500]: time="2024-10-08T20:07:20.439491348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:07:20.439574 containerd[1500]: time="2024-10-08T20:07:20.439540700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:07:20.439574 containerd[1500]: time="2024-10-08T20:07:20.439550639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:20.439849 containerd[1500]: time="2024-10-08T20:07:20.439742952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:20.449077 containerd[1500]: time="2024-10-08T20:07:20.447482530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:07:20.449077 containerd[1500]: time="2024-10-08T20:07:20.447522026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:07:20.449077 containerd[1500]: time="2024-10-08T20:07:20.447532736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:20.449077 containerd[1500]: time="2024-10-08T20:07:20.447619931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:20.469378 systemd[1]: Started cri-containerd-78b29d8a9ba743262d457625e105fe516c8d2545eda3105fb6b32e784174a573.scope - libcontainer container 78b29d8a9ba743262d457625e105fe516c8d2545eda3105fb6b32e784174a573.
Oct 8 20:07:20.473355 systemd[1]: Started cri-containerd-60c2319ccd3025710c401fdde25bf7c6e63986310840a65f0a7d690062f87402.scope - libcontainer container 60c2319ccd3025710c401fdde25bf7c6e63986310840a65f0a7d690062f87402.
Oct 8 20:07:20.476743 systemd[1]: Started cri-containerd-8f5ea494b86518032de82f9d736218a34ed5774e02dff0c94fffb23da337d7a8.scope - libcontainer container 8f5ea494b86518032de82f9d736218a34ed5774e02dff0c94fffb23da337d7a8.
Oct 8 20:07:20.508354 kubelet[2435]: W1008 20:07:20.508264 2435 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.138.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.508354 kubelet[2435]: E1008 20:07:20.508328 2435 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.138.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.138.82:6443: connect: connection refused
Oct 8 20:07:20.527567 containerd[1500]: time="2024-10-08T20:07:20.527442348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6,Uid:e9cd5c7e1f5d9eb6ae422e8aa8a3d297,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f5ea494b86518032de82f9d736218a34ed5774e02dff0c94fffb23da337d7a8\""
Oct 8 20:07:20.536144 containerd[1500]: time="2024-10-08T20:07:20.536029347Z" level=info msg="CreateContainer within sandbox \"8f5ea494b86518032de82f9d736218a34ed5774e02dff0c94fffb23da337d7a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 20:07:20.543088 containerd[1500]: time="2024-10-08T20:07:20.543030462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6,Uid:4c7edc2b675ff572e4226d21b15ce96b,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b29d8a9ba743262d457625e105fe516c8d2545eda3105fb6b32e784174a573\""
Oct 8 20:07:20.545996 containerd[1500]: time="2024-10-08T20:07:20.545919929Z" level=info msg="CreateContainer within sandbox \"78b29d8a9ba743262d457625e105fe516c8d2545eda3105fb6b32e784174a573\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 20:07:20.552834 containerd[1500]: time="2024-10-08T20:07:20.552798421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6,Uid:d512fd84d3d70b7fbdcdf07661629cee,Namespace:kube-system,Attempt:0,} returns sandbox id \"60c2319ccd3025710c401fdde25bf7c6e63986310840a65f0a7d690062f87402\""
Oct 8 20:07:20.556357 containerd[1500]: time="2024-10-08T20:07:20.556333537Z" level=info msg="CreateContainer within sandbox \"60c2319ccd3025710c401fdde25bf7c6e63986310840a65f0a7d690062f87402\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 20:07:20.562085 containerd[1500]: time="2024-10-08T20:07:20.561930179Z" level=info msg="CreateContainer within sandbox \"8f5ea494b86518032de82f9d736218a34ed5774e02dff0c94fffb23da337d7a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ec3a2e7ad8f87b0a1167c1e248c4ada33d3c55aa1d4c5881eddbb755a1204e0\""
Oct 8 20:07:20.562469 containerd[1500]: time="2024-10-08T20:07:20.562437817Z" level=info msg="CreateContainer within sandbox \"78b29d8a9ba743262d457625e105fe516c8d2545eda3105fb6b32e784174a573\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9\""
Oct 8 20:07:20.562940 containerd[1500]: time="2024-10-08T20:07:20.562909770Z" level=info msg="StartContainer for \"b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9\""
Oct 8 20:07:20.564518 containerd[1500]: time="2024-10-08T20:07:20.563649888Z" level=info msg="StartContainer for \"2ec3a2e7ad8f87b0a1167c1e248c4ada33d3c55aa1d4c5881eddbb755a1204e0\""
Oct 8 20:07:20.573398 containerd[1500]: time="2024-10-08T20:07:20.573375747Z" level=info msg="CreateContainer within sandbox \"60c2319ccd3025710c401fdde25bf7c6e63986310840a65f0a7d690062f87402\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b\""
Oct 8 20:07:20.573911 containerd[1500]: time="2024-10-08T20:07:20.573882655Z" level=info msg="StartContainer for \"44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b\""
Oct 8 20:07:20.599448 systemd[1]: Started cri-containerd-2ec3a2e7ad8f87b0a1167c1e248c4ada33d3c55aa1d4c5881eddbb755a1204e0.scope - libcontainer container 2ec3a2e7ad8f87b0a1167c1e248c4ada33d3c55aa1d4c5881eddbb755a1204e0.
Oct 8 20:07:20.601282 systemd[1]: Started cri-containerd-b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9.scope - libcontainer container b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9.
Oct 8 20:07:20.608027 systemd[1]: Started cri-containerd-44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b.scope - libcontainer container 44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b.
Oct 8 20:07:20.659581 containerd[1500]: time="2024-10-08T20:07:20.659532439Z" level=info msg="StartContainer for \"2ec3a2e7ad8f87b0a1167c1e248c4ada33d3c55aa1d4c5881eddbb755a1204e0\" returns successfully"
Oct 8 20:07:20.677211 containerd[1500]: time="2024-10-08T20:07:20.677153553Z" level=info msg="StartContainer for \"44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b\" returns successfully"
Oct 8 20:07:20.678301 containerd[1500]: time="2024-10-08T20:07:20.678272966Z" level=info msg="StartContainer for \"b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9\" returns successfully"
Oct 8 20:07:20.695562 kubelet[2435]: E1008 20:07:20.695530 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.138.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-3c1e2fa9c6?timeout=10s\": dial tcp 49.13.138.82:6443: connect: connection refused" interval="1.6s"
Oct 8 20:07:20.805349 kubelet[2435]: I1008 20:07:20.805242 2435 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:20.806592 kubelet[2435]: E1008 20:07:20.806571 2435 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.138.82:6443/api/v1/nodes\": dial tcp 49.13.138.82:6443: connect: connection refused" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:22.300586 kubelet[2435]: E1008 20:07:22.300538 2435 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-1-0-7-3c1e2fa9c6\" not found" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:22.356118 kubelet[2435]: E1008 20:07:22.356071 2435 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-1-0-7-3c1e2fa9c6" not found
Oct 8 20:07:22.409012 kubelet[2435]: I1008 20:07:22.408789 2435 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:22.415626 kubelet[2435]: I1008 20:07:22.415599 2435 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-7-3c1e2fa9c6"
Oct 8 20:07:23.272838 kubelet[2435]: I1008 20:07:23.272785 2435 apiserver.go:52] "Watching apiserver"
Oct 8 20:07:23.291684 kubelet[2435]: I1008 20:07:23.291660 2435 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 20:07:24.760771 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-7.scope)...
Oct 8 20:07:24.761191 systemd[1]: Reloading...
Oct 8 20:07:24.867316 zram_generator::config[2754]: No configuration found.
Oct 8 20:07:24.969068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:07:25.044073 systemd[1]: Reloading finished in 282 ms.
Oct 8 20:07:25.090448 kubelet[2435]: I1008 20:07:25.090373 2435 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:07:25.090841 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:07:25.099683 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 20:07:25.099914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:07:25.099956 systemd[1]: kubelet.service: Consumed 1.083s CPU time, 110.1M memory peak, 0B memory swap peak.
Oct 8 20:07:25.103633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:07:25.231353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:07:25.237128 (kubelet)[2799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 20:07:25.288324 kubelet[2799]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:07:25.288324 kubelet[2799]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 20:07:25.288324 kubelet[2799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:07:25.288324 kubelet[2799]: I1008 20:07:25.287278 2799 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 20:07:25.292245 kubelet[2799]: I1008 20:07:25.291397 2799 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 20:07:25.292245 kubelet[2799]: I1008 20:07:25.291413 2799 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 20:07:25.292245 kubelet[2799]: I1008 20:07:25.291531 2799 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 20:07:25.296314 kubelet[2799]: I1008 20:07:25.292591 2799 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 20:07:25.296314 kubelet[2799]: I1008 20:07:25.294093 2799 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:07:25.306707 kubelet[2799]: I1008 20:07:25.306673 2799 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Oct 8 20:07:25.309764 kubelet[2799]: I1008 20:07:25.309741 2799 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:07:25.309897 kubelet[2799]: I1008 20:07:25.309883 2799 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:07:25.309981 kubelet[2799]: I1008 20:07:25.309907 2799 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:07:25.309981 kubelet[2799]: I1008 20:07:25.309915 2799 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:07:25.309981 kubelet[2799]: I1008 
20:07:25.309943 2799 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:07:25.310042 kubelet[2799]: I1008 20:07:25.310019 2799 kubelet.go:396] "Attempting to sync node with API server" Oct 8 20:07:25.310042 kubelet[2799]: I1008 20:07:25.310030 2799 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:07:25.310079 kubelet[2799]: I1008 20:07:25.310051 2799 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:07:25.310079 kubelet[2799]: I1008 20:07:25.310065 2799 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:07:25.311738 kubelet[2799]: I1008 20:07:25.311665 2799 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:07:25.312185 kubelet[2799]: I1008 20:07:25.312165 2799 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:07:25.312710 kubelet[2799]: I1008 20:07:25.312690 2799 server.go:1256] "Started kubelet" Oct 8 20:07:25.319974 kubelet[2799]: I1008 20:07:25.319437 2799 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:07:25.331245 kubelet[2799]: I1008 20:07:25.330878 2799 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:07:25.331628 kubelet[2799]: I1008 20:07:25.331615 2799 server.go:461] "Adding debug handlers to kubelet server" Oct 8 20:07:25.334197 kubelet[2799]: I1008 20:07:25.334183 2799 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:07:25.334448 kubelet[2799]: I1008 20:07:25.334412 2799 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:07:25.340513 kubelet[2799]: I1008 20:07:25.340126 2799 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:07:25.340513 kubelet[2799]: I1008 20:07:25.340211 2799 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Oct 8 20:07:25.340513 kubelet[2799]: I1008 20:07:25.340336 2799 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 20:07:25.344426 kubelet[2799]: I1008 20:07:25.344413 2799 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:07:25.344574 kubelet[2799]: I1008 20:07:25.344528 2799 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:07:25.345485 kubelet[2799]: E1008 20:07:25.345446 2799 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:07:25.346499 kubelet[2799]: I1008 20:07:25.346475 2799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:07:25.350171 kubelet[2799]: I1008 20:07:25.349792 2799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:07:25.350171 kubelet[2799]: I1008 20:07:25.349813 2799 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:07:25.350171 kubelet[2799]: I1008 20:07:25.349847 2799 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 20:07:25.350171 kubelet[2799]: E1008 20:07:25.349894 2799 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:07:25.353905 kubelet[2799]: I1008 20:07:25.353869 2799 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:07:25.402081 kubelet[2799]: I1008 20:07:25.401972 2799 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:07:25.402081 kubelet[2799]: I1008 20:07:25.401990 2799 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:07:25.402081 kubelet[2799]: I1008 20:07:25.402004 2799 state_mem.go:36] "Initialized new in-memory state store" 
Oct 8 20:07:25.403553 kubelet[2799]: I1008 20:07:25.402118 2799 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:07:25.403553 kubelet[2799]: I1008 20:07:25.402136 2799 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:07:25.403553 kubelet[2799]: I1008 20:07:25.402142 2799 policy_none.go:49] "None policy: Start" Oct 8 20:07:25.403553 kubelet[2799]: I1008 20:07:25.402853 2799 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:07:25.403553 kubelet[2799]: I1008 20:07:25.402869 2799 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:07:25.403553 kubelet[2799]: I1008 20:07:25.402996 2799 state_mem.go:75] "Updated machine memory state" Oct 8 20:07:25.408889 kubelet[2799]: I1008 20:07:25.408739 2799 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:07:25.408937 kubelet[2799]: I1008 20:07:25.408924 2799 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:07:25.442546 kubelet[2799]: I1008 20:07:25.442499 2799 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.448916 kubelet[2799]: I1008 20:07:25.448893 2799 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.449015 kubelet[2799]: I1008 20:07:25.448944 2799 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.450529 kubelet[2799]: I1008 20:07:25.449957 2799 topology_manager.go:215] "Topology Admit Handler" podUID="e9cd5c7e1f5d9eb6ae422e8aa8a3d297" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.450529 kubelet[2799]: I1008 20:07:25.450021 2799 topology_manager.go:215] "Topology Admit Handler" podUID="d512fd84d3d70b7fbdcdf07661629cee" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.450529 kubelet[2799]: I1008 
20:07:25.450049 2799 topology_manager.go:215] "Topology Admit Handler" podUID="4c7edc2b675ff572e4226d21b15ce96b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.457050 kubelet[2799]: E1008 20:07:25.456719 2799 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6\" already exists" pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542357 kubelet[2799]: I1008 20:07:25.542324 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9cd5c7e1f5d9eb6ae422e8aa8a3d297-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"e9cd5c7e1f5d9eb6ae422e8aa8a3d297\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542619 kubelet[2799]: I1008 20:07:25.542370 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542619 kubelet[2799]: I1008 20:07:25.542393 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c7edc2b675ff572e4226d21b15ce96b-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"4c7edc2b675ff572e4226d21b15ce96b\") " pod="kube-system/kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542619 kubelet[2799]: I1008 20:07:25.542411 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9cd5c7e1f5d9eb6ae422e8aa8a3d297-k8s-certs\") pod 
\"kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"e9cd5c7e1f5d9eb6ae422e8aa8a3d297\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542619 kubelet[2799]: I1008 20:07:25.542432 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542619 kubelet[2799]: I1008 20:07:25.542451 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542777 kubelet[2799]: I1008 20:07:25.542470 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542777 kubelet[2799]: I1008 20:07:25.542490 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d512fd84d3d70b7fbdcdf07661629cee-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"d512fd84d3d70b7fbdcdf07661629cee\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:25.542777 kubelet[2799]: I1008 20:07:25.542510 2799 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9cd5c7e1f5d9eb6ae422e8aa8a3d297-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6\" (UID: \"e9cd5c7e1f5d9eb6ae422e8aa8a3d297\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:07:26.311933 kubelet[2799]: I1008 20:07:26.311889 2799 apiserver.go:52] "Watching apiserver" Oct 8 20:07:26.341249 kubelet[2799]: I1008 20:07:26.340852 2799 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:07:26.477579 kubelet[2799]: I1008 20:07:26.477493 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-7-3c1e2fa9c6" podStartSLOduration=1.477448001 podStartE2EDuration="1.477448001s" podCreationTimestamp="2024-10-08 20:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:07:26.466653713 +0000 UTC m=+1.224332143" watchObservedRunningTime="2024-10-08 20:07:26.477448001 +0000 UTC m=+1.235126431" Oct 8 20:07:26.500201 kubelet[2799]: I1008 20:07:26.499667 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6" podStartSLOduration=3.499632366 podStartE2EDuration="3.499632366s" podCreationTimestamp="2024-10-08 20:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:07:26.478471138 +0000 UTC m=+1.236149568" watchObservedRunningTime="2024-10-08 20:07:26.499632366 +0000 UTC m=+1.257310795" Oct 8 20:07:26.528841 kubelet[2799]: I1008 20:07:26.528765 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-1-0-7-3c1e2fa9c6" podStartSLOduration=1.52873285 podStartE2EDuration="1.52873285s" 
podCreationTimestamp="2024-10-08 20:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:07:26.499887789 +0000 UTC m=+1.257566219" watchObservedRunningTime="2024-10-08 20:07:26.52873285 +0000 UTC m=+1.286411280" Oct 8 20:07:29.572462 sudo[1904]: pam_unix(sudo:session): session closed for user root Oct 8 20:07:29.730560 sshd[1901]: pam_unix(sshd:session): session closed for user core Oct 8 20:07:29.738275 systemd[1]: sshd@6-49.13.138.82:22-147.75.109.163:58134.service: Deactivated successfully. Oct 8 20:07:29.744689 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:07:29.745479 systemd[1]: session-7.scope: Consumed 4.699s CPU time, 188.0M memory peak, 0B memory swap peak. Oct 8 20:07:29.747378 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:07:29.750295 systemd-logind[1473]: Removed session 7. Oct 8 20:07:37.209515 kubelet[2799]: I1008 20:07:37.209046 2799 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:07:37.211446 containerd[1500]: time="2024-10-08T20:07:37.211411715Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:07:37.211954 kubelet[2799]: I1008 20:07:37.211654 2799 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:07:38.067094 kubelet[2799]: I1008 20:07:38.067054 2799 topology_manager.go:215] "Topology Admit Handler" podUID="db2ba179-a3cb-4a84-bf62-cd993750e780" podNamespace="kube-system" podName="kube-proxy-l5z2c" Oct 8 20:07:38.077739 systemd[1]: Created slice kubepods-besteffort-poddb2ba179_a3cb_4a84_bf62_cd993750e780.slice - libcontainer container kubepods-besteffort-poddb2ba179_a3cb_4a84_bf62_cd993750e780.slice. 
Oct 8 20:07:38.127511 kubelet[2799]: I1008 20:07:38.127479 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db2ba179-a3cb-4a84-bf62-cd993750e780-kube-proxy\") pod \"kube-proxy-l5z2c\" (UID: \"db2ba179-a3cb-4a84-bf62-cd993750e780\") " pod="kube-system/kube-proxy-l5z2c" Oct 8 20:07:38.127511 kubelet[2799]: I1008 20:07:38.127515 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db2ba179-a3cb-4a84-bf62-cd993750e780-xtables-lock\") pod \"kube-proxy-l5z2c\" (UID: \"db2ba179-a3cb-4a84-bf62-cd993750e780\") " pod="kube-system/kube-proxy-l5z2c" Oct 8 20:07:38.127789 kubelet[2799]: I1008 20:07:38.127533 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db2ba179-a3cb-4a84-bf62-cd993750e780-lib-modules\") pod \"kube-proxy-l5z2c\" (UID: \"db2ba179-a3cb-4a84-bf62-cd993750e780\") " pod="kube-system/kube-proxy-l5z2c" Oct 8 20:07:38.127789 kubelet[2799]: I1008 20:07:38.127554 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwg7\" (UniqueName: \"kubernetes.io/projected/db2ba179-a3cb-4a84-bf62-cd993750e780-kube-api-access-hgwg7\") pod \"kube-proxy-l5z2c\" (UID: \"db2ba179-a3cb-4a84-bf62-cd993750e780\") " pod="kube-system/kube-proxy-l5z2c" Oct 8 20:07:38.284767 kubelet[2799]: I1008 20:07:38.284713 2799 topology_manager.go:215] "Topology Admit Handler" podUID="2fefb574-9d2c-4ec2-b3a7-b93001a35e9e" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-4sqvl" Oct 8 20:07:38.291833 systemd[1]: Created slice kubepods-besteffort-pod2fefb574_9d2c_4ec2_b3a7_b93001a35e9e.slice - libcontainer container kubepods-besteffort-pod2fefb574_9d2c_4ec2_b3a7_b93001a35e9e.slice. 
Oct 8 20:07:38.332849 kubelet[2799]: I1008 20:07:38.332703 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2fefb574-9d2c-4ec2-b3a7-b93001a35e9e-var-lib-calico\") pod \"tigera-operator-5d56685c77-4sqvl\" (UID: \"2fefb574-9d2c-4ec2-b3a7-b93001a35e9e\") " pod="tigera-operator/tigera-operator-5d56685c77-4sqvl" Oct 8 20:07:38.332849 kubelet[2799]: I1008 20:07:38.332746 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtxcp\" (UniqueName: \"kubernetes.io/projected/2fefb574-9d2c-4ec2-b3a7-b93001a35e9e-kube-api-access-mtxcp\") pod \"tigera-operator-5d56685c77-4sqvl\" (UID: \"2fefb574-9d2c-4ec2-b3a7-b93001a35e9e\") " pod="tigera-operator/tigera-operator-5d56685c77-4sqvl" Oct 8 20:07:38.385364 containerd[1500]: time="2024-10-08T20:07:38.385311627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5z2c,Uid:db2ba179-a3cb-4a84-bf62-cd993750e780,Namespace:kube-system,Attempt:0,}" Oct 8 20:07:38.409009 containerd[1500]: time="2024-10-08T20:07:38.408771186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:07:38.409009 containerd[1500]: time="2024-10-08T20:07:38.408822865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:07:38.409009 containerd[1500]: time="2024-10-08T20:07:38.408848304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:07:38.409009 containerd[1500]: time="2024-10-08T20:07:38.408962229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:07:38.429608 systemd[1]: run-containerd-runc-k8s.io-98e51addb5a847c871801de50568d1886dd34cf5b9391b371b0f926b22b084cf-runc.mXuCGQ.mount: Deactivated successfully. Oct 8 20:07:38.435536 systemd[1]: Started cri-containerd-98e51addb5a847c871801de50568d1886dd34cf5b9391b371b0f926b22b084cf.scope - libcontainer container 98e51addb5a847c871801de50568d1886dd34cf5b9391b371b0f926b22b084cf. Oct 8 20:07:38.460917 containerd[1500]: time="2024-10-08T20:07:38.460868556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5z2c,Uid:db2ba179-a3cb-4a84-bf62-cd993750e780,Namespace:kube-system,Attempt:0,} returns sandbox id \"98e51addb5a847c871801de50568d1886dd34cf5b9391b371b0f926b22b084cf\"" Oct 8 20:07:38.464178 containerd[1500]: time="2024-10-08T20:07:38.464139115Z" level=info msg="CreateContainer within sandbox \"98e51addb5a847c871801de50568d1886dd34cf5b9391b371b0f926b22b084cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:07:38.479557 containerd[1500]: time="2024-10-08T20:07:38.479504982Z" level=info msg="CreateContainer within sandbox \"98e51addb5a847c871801de50568d1886dd34cf5b9391b371b0f926b22b084cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c95c767b228bbc43e7ffe9f922271f5de1ae0940197e71be84fc9c04ed7cfc61\"" Oct 8 20:07:38.480353 containerd[1500]: time="2024-10-08T20:07:38.480332653Z" level=info msg="StartContainer for \"c95c767b228bbc43e7ffe9f922271f5de1ae0940197e71be84fc9c04ed7cfc61\"" Oct 8 20:07:38.509364 systemd[1]: Started cri-containerd-c95c767b228bbc43e7ffe9f922271f5de1ae0940197e71be84fc9c04ed7cfc61.scope - libcontainer container c95c767b228bbc43e7ffe9f922271f5de1ae0940197e71be84fc9c04ed7cfc61. 
Oct 8 20:07:38.541165 containerd[1500]: time="2024-10-08T20:07:38.541093578Z" level=info msg="StartContainer for \"c95c767b228bbc43e7ffe9f922271f5de1ae0940197e71be84fc9c04ed7cfc61\" returns successfully" Oct 8 20:07:38.595976 containerd[1500]: time="2024-10-08T20:07:38.595480414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-4sqvl,Uid:2fefb574-9d2c-4ec2-b3a7-b93001a35e9e,Namespace:tigera-operator,Attempt:0,}" Oct 8 20:07:38.616185 containerd[1500]: time="2024-10-08T20:07:38.616059166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:07:38.616640 containerd[1500]: time="2024-10-08T20:07:38.616132435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:07:38.616640 containerd[1500]: time="2024-10-08T20:07:38.616618929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:07:38.616872 containerd[1500]: time="2024-10-08T20:07:38.616705463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:07:38.639380 systemd[1]: Started cri-containerd-9e2d4894de139fae94b0def3e1b9b028a360e6f34f16025e9c09ed34f13dc299.scope - libcontainer container 9e2d4894de139fae94b0def3e1b9b028a360e6f34f16025e9c09ed34f13dc299. 
Oct 8 20:07:38.684083 containerd[1500]: time="2024-10-08T20:07:38.684050967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-4sqvl,Uid:2fefb574-9d2c-4ec2-b3a7-b93001a35e9e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9e2d4894de139fae94b0def3e1b9b028a360e6f34f16025e9c09ed34f13dc299\"" Oct 8 20:07:38.691913 containerd[1500]: time="2024-10-08T20:07:38.691761252Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 20:07:40.607460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680947609.mount: Deactivated successfully. Oct 8 20:07:40.956020 containerd[1500]: time="2024-10-08T20:07:40.955882313Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:40.956972 containerd[1500]: time="2024-10-08T20:07:40.956937837Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136565" Oct 8 20:07:40.957685 containerd[1500]: time="2024-10-08T20:07:40.957648607Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:40.959472 containerd[1500]: time="2024-10-08T20:07:40.959427826Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:40.960151 containerd[1500]: time="2024-10-08T20:07:40.960030641Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.268245473s" Oct 8 20:07:40.960151 
containerd[1500]: time="2024-10-08T20:07:40.960057231Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 20:07:40.961923 containerd[1500]: time="2024-10-08T20:07:40.961804660Z" level=info msg="CreateContainer within sandbox \"9e2d4894de139fae94b0def3e1b9b028a360e6f34f16025e9c09ed34f13dc299\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:07:40.975458 containerd[1500]: time="2024-10-08T20:07:40.975381722Z" level=info msg="CreateContainer within sandbox \"9e2d4894de139fae94b0def3e1b9b028a360e6f34f16025e9c09ed34f13dc299\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07\"" Oct 8 20:07:40.976263 containerd[1500]: time="2024-10-08T20:07:40.976069788Z" level=info msg="StartContainer for \"5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07\"" Oct 8 20:07:41.008369 systemd[1]: Started cri-containerd-5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07.scope - libcontainer container 5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07. 
Oct 8 20:07:41.033642 containerd[1500]: time="2024-10-08T20:07:41.033575509Z" level=info msg="StartContainer for \"5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07\" returns successfully" Oct 8 20:07:41.425316 kubelet[2799]: I1008 20:07:41.425247 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l5z2c" podStartSLOduration=3.424964568 podStartE2EDuration="3.424964568s" podCreationTimestamp="2024-10-08 20:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:07:39.446187757 +0000 UTC m=+14.203866267" watchObservedRunningTime="2024-10-08 20:07:41.424964568 +0000 UTC m=+16.182643008" Oct 8 20:07:41.426242 kubelet[2799]: I1008 20:07:41.426014 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-4sqvl" podStartSLOduration=1.157103937 podStartE2EDuration="3.425984325s" podCreationTimestamp="2024-10-08 20:07:38 +0000 UTC" firstStartedPulling="2024-10-08 20:07:38.691470991 +0000 UTC m=+13.449149422" lastFinishedPulling="2024-10-08 20:07:40.96035138 +0000 UTC m=+15.718029810" observedRunningTime="2024-10-08 20:07:41.424804373 +0000 UTC m=+16.182482813" watchObservedRunningTime="2024-10-08 20:07:41.425984325 +0000 UTC m=+16.183662765" Oct 8 20:07:44.010256 kubelet[2799]: I1008 20:07:44.009998 2799 topology_manager.go:215] "Topology Admit Handler" podUID="43db6a68-6f69-4359-a18b-f1c7a53a2887" podNamespace="calico-system" podName="calico-typha-76df5dd7bc-l2p92" Oct 8 20:07:44.017244 systemd[1]: Created slice kubepods-besteffort-pod43db6a68_6f69_4359_a18b_f1c7a53a2887.slice - libcontainer container kubepods-besteffort-pod43db6a68_6f69_4359_a18b_f1c7a53a2887.slice. 
Oct 8 20:07:44.070482 kubelet[2799]: I1008 20:07:44.070441 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/43db6a68-6f69-4359-a18b-f1c7a53a2887-typha-certs\") pod \"calico-typha-76df5dd7bc-l2p92\" (UID: \"43db6a68-6f69-4359-a18b-f1c7a53a2887\") " pod="calico-system/calico-typha-76df5dd7bc-l2p92" Oct 8 20:07:44.070482 kubelet[2799]: I1008 20:07:44.070494 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqtp\" (UniqueName: \"kubernetes.io/projected/43db6a68-6f69-4359-a18b-f1c7a53a2887-kube-api-access-lxqtp\") pod \"calico-typha-76df5dd7bc-l2p92\" (UID: \"43db6a68-6f69-4359-a18b-f1c7a53a2887\") " pod="calico-system/calico-typha-76df5dd7bc-l2p92" Oct 8 20:07:44.070648 kubelet[2799]: I1008 20:07:44.070519 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43db6a68-6f69-4359-a18b-f1c7a53a2887-tigera-ca-bundle\") pod \"calico-typha-76df5dd7bc-l2p92\" (UID: \"43db6a68-6f69-4359-a18b-f1c7a53a2887\") " pod="calico-system/calico-typha-76df5dd7bc-l2p92" Oct 8 20:07:44.117160 kubelet[2799]: I1008 20:07:44.117112 2799 topology_manager.go:215] "Topology Admit Handler" podUID="1f042c4a-7a23-44c1-96a5-a73cb446b7ac" podNamespace="calico-system" podName="calico-node-x6dfr" Oct 8 20:07:44.124592 systemd[1]: Created slice kubepods-besteffort-pod1f042c4a_7a23_44c1_96a5_a73cb446b7ac.slice - libcontainer container kubepods-besteffort-pod1f042c4a_7a23_44c1_96a5_a73cb446b7ac.slice. 
Oct 8 20:07:44.126260 kubelet[2799]: W1008 20:07:44.125679 2799 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081-1-0-7-3c1e2fa9c6" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-1-0-7-3c1e2fa9c6' and this object
Oct 8 20:07:44.126260 kubelet[2799]: E1008 20:07:44.125705 2799 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081-1-0-7-3c1e2fa9c6" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-1-0-7-3c1e2fa9c6' and this object
Oct 8 20:07:44.171609 kubelet[2799]: I1008 20:07:44.171573 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-var-run-calico\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171609 kubelet[2799]: I1008 20:07:44.171613 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-cni-bin-dir\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171804 kubelet[2799]: I1008 20:07:44.171653 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-xtables-lock\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171804 kubelet[2799]: I1008 20:07:44.171669 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-policysync\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171804 kubelet[2799]: I1008 20:07:44.171684 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-node-certs\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171804 kubelet[2799]: I1008 20:07:44.171702 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-flexvol-driver-host\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171804 kubelet[2799]: I1008 20:07:44.171726 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-cni-net-dir\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171956 kubelet[2799]: I1008 20:07:44.171741 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-tigera-ca-bundle\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171956 kubelet[2799]: I1008 20:07:44.171757 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nvx\" (UniqueName: \"kubernetes.io/projected/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-kube-api-access-c9nvx\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171956 kubelet[2799]: I1008 20:07:44.171772 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-lib-modules\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171956 kubelet[2799]: I1008 20:07:44.171787 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-cni-log-dir\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.171956 kubelet[2799]: I1008 20:07:44.171802 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f042c4a-7a23-44c1-96a5-a73cb446b7ac-var-lib-calico\") pod \"calico-node-x6dfr\" (UID: \"1f042c4a-7a23-44c1-96a5-a73cb446b7ac\") " pod="calico-system/calico-node-x6dfr"
Oct 8 20:07:44.294767 kubelet[2799]: E1008 20:07:44.294544 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.294767 kubelet[2799]: W1008 20:07:44.294577 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.294767 kubelet[2799]: E1008 20:07:44.294597 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.297773 kubelet[2799]: I1008 20:07:44.297747 2799 topology_manager.go:215] "Topology Admit Handler" podUID="0b81e2b9-b925-4fbf-96b6-98335c1c1130" podNamespace="calico-system" podName="csi-node-driver-kmk8v"
Oct 8 20:07:44.297981 kubelet[2799]: E1008 20:07:44.297947 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmk8v" podUID="0b81e2b9-b925-4fbf-96b6-98335c1c1130"
Oct 8 20:07:44.322471 containerd[1500]: time="2024-10-08T20:07:44.322351911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76df5dd7bc-l2p92,Uid:43db6a68-6f69-4359-a18b-f1c7a53a2887,Namespace:calico-system,Attempt:0,}"
Oct 8 20:07:44.363267 kubelet[2799]: E1008 20:07:44.359273 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.363267 kubelet[2799]: W1008 20:07:44.359338 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.363267 kubelet[2799]: E1008 20:07:44.359359 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.363267 kubelet[2799]: E1008 20:07:44.359965 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.363267 kubelet[2799]: W1008 20:07:44.359976 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.363267 kubelet[2799]: E1008 20:07:44.360026 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.363267 kubelet[2799]: E1008 20:07:44.361095 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.363267 kubelet[2799]: W1008 20:07:44.361103 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.363267 kubelet[2799]: E1008 20:07:44.361115 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.363267 kubelet[2799]: E1008 20:07:44.361363 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.369599 kubelet[2799]: W1008 20:07:44.361374 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.369599 kubelet[2799]: E1008 20:07:44.361423 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.369599 kubelet[2799]: E1008 20:07:44.361640 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.369599 kubelet[2799]: W1008 20:07:44.361648 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.369599 kubelet[2799]: E1008 20:07:44.361658 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.369599 kubelet[2799]: E1008 20:07:44.362374 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.369599 kubelet[2799]: W1008 20:07:44.362383 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.369599 kubelet[2799]: E1008 20:07:44.362396 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.369599 kubelet[2799]: E1008 20:07:44.363130 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.369599 kubelet[2799]: W1008 20:07:44.363163 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.369861 kubelet[2799]: E1008 20:07:44.363175 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.369861 kubelet[2799]: E1008 20:07:44.363404 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.369861 kubelet[2799]: W1008 20:07:44.363412 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.369861 kubelet[2799]: E1008 20:07:44.363438 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.369861 kubelet[2799]: E1008 20:07:44.365396 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.369861 kubelet[2799]: W1008 20:07:44.365405 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.369861 kubelet[2799]: E1008 20:07:44.365415 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.369861 kubelet[2799]: E1008 20:07:44.365787 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.369861 kubelet[2799]: W1008 20:07:44.365795 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.369861 kubelet[2799]: E1008 20:07:44.365805 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.370038 kubelet[2799]: E1008 20:07:44.366167 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.370038 kubelet[2799]: W1008 20:07:44.366174 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.370038 kubelet[2799]: E1008 20:07:44.366184 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.370038 kubelet[2799]: E1008 20:07:44.366924 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.370038 kubelet[2799]: W1008 20:07:44.366932 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.370038 kubelet[2799]: E1008 20:07:44.366942 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.370038 kubelet[2799]: E1008 20:07:44.367343 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.370038 kubelet[2799]: W1008 20:07:44.367351 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.370038 kubelet[2799]: E1008 20:07:44.367363 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.370038 kubelet[2799]: E1008 20:07:44.369721 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.370236 kubelet[2799]: W1008 20:07:44.369730 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.370236 kubelet[2799]: E1008 20:07:44.369743 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.370236 kubelet[2799]: E1008 20:07:44.369916 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.370236 kubelet[2799]: W1008 20:07:44.369923 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.370236 kubelet[2799]: E1008 20:07:44.369933 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.370479 kubelet[2799]: E1008 20:07:44.370461 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.370479 kubelet[2799]: W1008 20:07:44.370474 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.370590 kubelet[2799]: E1008 20:07:44.370486 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.370678 kubelet[2799]: E1008 20:07:44.370660 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.370678 kubelet[2799]: W1008 20:07:44.370673 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.370763 kubelet[2799]: E1008 20:07:44.370682 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.371152 kubelet[2799]: E1008 20:07:44.371139 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.371152 kubelet[2799]: W1008 20:07:44.371149 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.371462 kubelet[2799]: E1008 20:07:44.371159 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.375163 kubelet[2799]: E1008 20:07:44.374637 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.375163 kubelet[2799]: W1008 20:07:44.374649 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.375163 kubelet[2799]: E1008 20:07:44.374660 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.375163 kubelet[2799]: E1008 20:07:44.374851 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.375163 kubelet[2799]: W1008 20:07:44.374858 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.375163 kubelet[2799]: E1008 20:07:44.374867 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.375343 kubelet[2799]: E1008 20:07:44.375294 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.375343 kubelet[2799]: W1008 20:07:44.375302 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.375343 kubelet[2799]: E1008 20:07:44.375313 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.375433 kubelet[2799]: I1008 20:07:44.375346 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b81e2b9-b925-4fbf-96b6-98335c1c1130-registration-dir\") pod \"csi-node-driver-kmk8v\" (UID: \"0b81e2b9-b925-4fbf-96b6-98335c1c1130\") " pod="calico-system/csi-node-driver-kmk8v"
Oct 8 20:07:44.375938 kubelet[2799]: E1008 20:07:44.375914 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.375938 kubelet[2799]: W1008 20:07:44.375930 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.376001 kubelet[2799]: E1008 20:07:44.375964 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.376001 kubelet[2799]: I1008 20:07:44.375984 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0b81e2b9-b925-4fbf-96b6-98335c1c1130-varrun\") pod \"csi-node-driver-kmk8v\" (UID: \"0b81e2b9-b925-4fbf-96b6-98335c1c1130\") " pod="calico-system/csi-node-driver-kmk8v"
Oct 8 20:07:44.376441 kubelet[2799]: E1008 20:07:44.376309 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.376441 kubelet[2799]: W1008 20:07:44.376406 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.378292 kubelet[2799]: E1008 20:07:44.378272 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.378357 kubelet[2799]: I1008 20:07:44.378302 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b81e2b9-b925-4fbf-96b6-98335c1c1130-socket-dir\") pod \"csi-node-driver-kmk8v\" (UID: \"0b81e2b9-b925-4fbf-96b6-98335c1c1130\") " pod="calico-system/csi-node-driver-kmk8v"
Oct 8 20:07:44.378357 kubelet[2799]: E1008 20:07:44.379093 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.378357 kubelet[2799]: W1008 20:07:44.379102 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.378357 kubelet[2799]: E1008 20:07:44.379133 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.379418 kubelet[2799]: E1008 20:07:44.379341 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.379418 kubelet[2799]: W1008 20:07:44.379351 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.379456 kubelet[2799]: E1008 20:07:44.379435 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.379893 kubelet[2799]: E1008 20:07:44.379610 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.379893 kubelet[2799]: W1008 20:07:44.379624 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.379893 kubelet[2799]: E1008 20:07:44.379693 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.379893 kubelet[2799]: E1008 20:07:44.379838 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.379893 kubelet[2799]: W1008 20:07:44.379845 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.379893 kubelet[2799]: E1008 20:07:44.379861 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.379893 kubelet[2799]: I1008 20:07:44.379879 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7t2n\" (UniqueName: \"kubernetes.io/projected/0b81e2b9-b925-4fbf-96b6-98335c1c1130-kube-api-access-z7t2n\") pod \"csi-node-driver-kmk8v\" (UID: \"0b81e2b9-b925-4fbf-96b6-98335c1c1130\") " pod="calico-system/csi-node-driver-kmk8v"
Oct 8 20:07:44.381256 kubelet[2799]: E1008 20:07:44.381233 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.381256 kubelet[2799]: W1008 20:07:44.381247 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.381423 kubelet[2799]: E1008 20:07:44.381389 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.383692 kubelet[2799]: E1008 20:07:44.383661 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.383692 kubelet[2799]: W1008 20:07:44.383673 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.383692 kubelet[2799]: E1008 20:07:44.383685 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.385588 kubelet[2799]: E1008 20:07:44.385557 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.385588 kubelet[2799]: W1008 20:07:44.385572 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.385658 kubelet[2799]: E1008 20:07:44.385598 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.385843 containerd[1500]: time="2024-10-08T20:07:44.384013245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:07:44.385843 containerd[1500]: time="2024-10-08T20:07:44.384073489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:07:44.385843 containerd[1500]: time="2024-10-08T20:07:44.384097866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:44.385843 containerd[1500]: time="2024-10-08T20:07:44.384200080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:07:44.385999 kubelet[2799]: E1008 20:07:44.385857 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.385999 kubelet[2799]: W1008 20:07:44.385864 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.385999 kubelet[2799]: E1008 20:07:44.385874 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.386690 kubelet[2799]: E1008 20:07:44.386655 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.386690 kubelet[2799]: W1008 20:07:44.386674 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.386771 kubelet[2799]: E1008 20:07:44.386694 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.390245 kubelet[2799]: E1008 20:07:44.387569 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.390245 kubelet[2799]: W1008 20:07:44.387579 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.390245 kubelet[2799]: E1008 20:07:44.387590 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.390245 kubelet[2799]: E1008 20:07:44.387881 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.390245 kubelet[2799]: W1008 20:07:44.387889 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.390245 kubelet[2799]: E1008 20:07:44.388013 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.390245 kubelet[2799]: I1008 20:07:44.388044 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b81e2b9-b925-4fbf-96b6-98335c1c1130-kubelet-dir\") pod \"csi-node-driver-kmk8v\" (UID: \"0b81e2b9-b925-4fbf-96b6-98335c1c1130\") " pod="calico-system/csi-node-driver-kmk8v"
Oct 8 20:07:44.390245 kubelet[2799]: E1008 20:07:44.388533 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.390245 kubelet[2799]: W1008 20:07:44.388542 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.390469 kubelet[2799]: E1008 20:07:44.388552 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.390689 kubelet[2799]: E1008 20:07:44.390669 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:44.390689 kubelet[2799]: W1008 20:07:44.390684 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:44.390749 kubelet[2799]: E1008 20:07:44.390708 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 20:07:44.413195 systemd[1]: Started cri-containerd-bb8c694166c25d566a469ed1c78b0b87ba4325e0675c331b650449a74e64dce6.scope - libcontainer container bb8c694166c25d566a469ed1c78b0b87ba4325e0675c331b650449a74e64dce6.
Oct 8 20:07:44.472406 containerd[1500]: time="2024-10-08T20:07:44.472313561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76df5dd7bc-l2p92,Uid:43db6a68-6f69-4359-a18b-f1c7a53a2887,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb8c694166c25d566a469ed1c78b0b87ba4325e0675c331b650449a74e64dce6\"" Oct 8 20:07:44.475820 containerd[1500]: time="2024-10-08T20:07:44.475781761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:07:44.488857 kubelet[2799]: E1008 20:07:44.488742 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.488857 kubelet[2799]: W1008 20:07:44.488758 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.488857 kubelet[2799]: E1008 20:07:44.488776 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.489179 kubelet[2799]: E1008 20:07:44.489058 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.489179 kubelet[2799]: W1008 20:07:44.489068 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.489477 kubelet[2799]: E1008 20:07:44.489364 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.489477 kubelet[2799]: W1008 20:07:44.489375 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.489477 kubelet[2799]: E1008 20:07:44.489388 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.489631 kubelet[2799]: E1008 20:07:44.489579 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.489788 kubelet[2799]: E1008 20:07:44.489754 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.489788 kubelet[2799]: W1008 20:07:44.489775 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.490098 kubelet[2799]: E1008 20:07:44.489988 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.490407 kubelet[2799]: E1008 20:07:44.490384 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.490407 kubelet[2799]: W1008 20:07:44.490395 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.490562 kubelet[2799]: E1008 20:07:44.490470 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.491076 kubelet[2799]: E1008 20:07:44.490818 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.491076 kubelet[2799]: W1008 20:07:44.490827 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.491076 kubelet[2799]: E1008 20:07:44.490842 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.491419 kubelet[2799]: E1008 20:07:44.491363 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.491501 kubelet[2799]: W1008 20:07:44.491488 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.491656 kubelet[2799]: E1008 20:07:44.491612 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.491984 kubelet[2799]: E1008 20:07:44.491932 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.491984 kubelet[2799]: W1008 20:07:44.491948 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.492429 kubelet[2799]: E1008 20:07:44.492417 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.492574 kubelet[2799]: E1008 20:07:44.492553 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.492574 kubelet[2799]: W1008 20:07:44.492562 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.492757 kubelet[2799]: E1008 20:07:44.492679 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.492957 kubelet[2799]: E1008 20:07:44.492935 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.492957 kubelet[2799]: W1008 20:07:44.492944 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.493164 kubelet[2799]: E1008 20:07:44.493153 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.493489 kubelet[2799]: E1008 20:07:44.493428 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.493489 kubelet[2799]: W1008 20:07:44.493440 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.493489 kubelet[2799]: E1008 20:07:44.493455 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.493949 kubelet[2799]: E1008 20:07:44.493889 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.493949 kubelet[2799]: W1008 20:07:44.493899 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.494137 kubelet[2799]: E1008 20:07:44.494048 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.494904 kubelet[2799]: E1008 20:07:44.494790 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.494904 kubelet[2799]: W1008 20:07:44.494800 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.494904 kubelet[2799]: E1008 20:07:44.494816 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.495870 kubelet[2799]: E1008 20:07:44.495771 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.495870 kubelet[2799]: W1008 20:07:44.495782 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.499530 kubelet[2799]: E1008 20:07:44.499057 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.499530 kubelet[2799]: E1008 20:07:44.499448 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.499530 kubelet[2799]: W1008 20:07:44.499456 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.499655 kubelet[2799]: E1008 20:07:44.499636 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.500487 kubelet[2799]: E1008 20:07:44.500286 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.500487 kubelet[2799]: W1008 20:07:44.500309 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.500487 kubelet[2799]: E1008 20:07:44.500401 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.500582 kubelet[2799]: E1008 20:07:44.500564 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.500582 kubelet[2799]: W1008 20:07:44.500578 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.501311 kubelet[2799]: E1008 20:07:44.500692 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.501311 kubelet[2799]: E1008 20:07:44.501009 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.501311 kubelet[2799]: W1008 20:07:44.501017 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.501311 kubelet[2799]: E1008 20:07:44.501242 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.502112 kubelet[2799]: E1008 20:07:44.501473 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.502112 kubelet[2799]: W1008 20:07:44.501486 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.502112 kubelet[2799]: E1008 20:07:44.501574 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.502112 kubelet[2799]: E1008 20:07:44.501883 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.502112 kubelet[2799]: W1008 20:07:44.501891 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.502112 kubelet[2799]: E1008 20:07:44.502004 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.502966 kubelet[2799]: E1008 20:07:44.502318 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.502966 kubelet[2799]: W1008 20:07:44.502330 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.502966 kubelet[2799]: E1008 20:07:44.502453 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.502966 kubelet[2799]: E1008 20:07:44.502765 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.502966 kubelet[2799]: W1008 20:07:44.502774 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.503838 kubelet[2799]: E1008 20:07:44.503201 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.503838 kubelet[2799]: E1008 20:07:44.503373 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.503838 kubelet[2799]: W1008 20:07:44.503408 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.503838 kubelet[2799]: E1008 20:07:44.503422 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.504718 kubelet[2799]: E1008 20:07:44.504300 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.504718 kubelet[2799]: W1008 20:07:44.504312 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.504718 kubelet[2799]: E1008 20:07:44.504397 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.504718 kubelet[2799]: E1008 20:07:44.504555 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.504718 kubelet[2799]: W1008 20:07:44.504562 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.504718 kubelet[2799]: E1008 20:07:44.504582 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.504858 kubelet[2799]: E1008 20:07:44.504783 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.504858 kubelet[2799]: W1008 20:07:44.504802 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.504858 kubelet[2799]: E1008 20:07:44.504813 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.505653 kubelet[2799]: E1008 20:07:44.505211 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.505653 kubelet[2799]: W1008 20:07:44.505243 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.505653 kubelet[2799]: E1008 20:07:44.505254 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.594934 kubelet[2799]: E1008 20:07:44.594769 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.594934 kubelet[2799]: W1008 20:07:44.594789 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.594934 kubelet[2799]: E1008 20:07:44.594810 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.696303 kubelet[2799]: E1008 20:07:44.696269 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.696303 kubelet[2799]: W1008 20:07:44.696289 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.696303 kubelet[2799]: E1008 20:07:44.696309 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.797198 kubelet[2799]: E1008 20:07:44.797115 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.797198 kubelet[2799]: W1008 20:07:44.797134 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.797198 kubelet[2799]: E1008 20:07:44.797154 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:44.898378 kubelet[2799]: E1008 20:07:44.898281 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.898378 kubelet[2799]: W1008 20:07:44.898301 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.898378 kubelet[2799]: E1008 20:07:44.898321 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:44.999460 kubelet[2799]: E1008 20:07:44.999418 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:44.999460 kubelet[2799]: W1008 20:07:44.999444 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:44.999460 kubelet[2799]: E1008 20:07:44.999470 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:45.100658 kubelet[2799]: E1008 20:07:45.100627 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:45.100658 kubelet[2799]: W1008 20:07:45.100645 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:45.100658 kubelet[2799]: E1008 20:07:45.100664 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:45.190534 kubelet[2799]: E1008 20:07:45.190406 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:45.190534 kubelet[2799]: W1008 20:07:45.190425 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:45.190534 kubelet[2799]: E1008 20:07:45.190445 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:45.329737 containerd[1500]: time="2024-10-08T20:07:45.329656304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6dfr,Uid:1f042c4a-7a23-44c1-96a5-a73cb446b7ac,Namespace:calico-system,Attempt:0,}" Oct 8 20:07:45.384247 containerd[1500]: time="2024-10-08T20:07:45.382876736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:07:45.384247 containerd[1500]: time="2024-10-08T20:07:45.382937350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:07:45.384247 containerd[1500]: time="2024-10-08T20:07:45.382957829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:07:45.384247 containerd[1500]: time="2024-10-08T20:07:45.383027492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:07:45.417831 systemd[1]: Started cri-containerd-f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da.scope - libcontainer container f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da. 
Oct 8 20:07:45.477799 containerd[1500]: time="2024-10-08T20:07:45.477491440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6dfr,Uid:1f042c4a-7a23-44c1-96a5-a73cb446b7ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da\"" Oct 8 20:07:46.268250 containerd[1500]: time="2024-10-08T20:07:46.268189366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:46.269109 containerd[1500]: time="2024-10-08T20:07:46.269060732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 20:07:46.270074 containerd[1500]: time="2024-10-08T20:07:46.270035595Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:46.272225 containerd[1500]: time="2024-10-08T20:07:46.272172929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:46.273117 containerd[1500]: time="2024-10-08T20:07:46.272684631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 1.796862754s" Oct 8 20:07:46.273117 containerd[1500]: time="2024-10-08T20:07:46.272711473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 20:07:46.274845 containerd[1500]: 
time="2024-10-08T20:07:46.274087548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:07:46.286778 containerd[1500]: time="2024-10-08T20:07:46.286015322Z" level=info msg="CreateContainer within sandbox \"bb8c694166c25d566a469ed1c78b0b87ba4325e0675c331b650449a74e64dce6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:07:46.300414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863782446.mount: Deactivated successfully. Oct 8 20:07:46.310144 containerd[1500]: time="2024-10-08T20:07:46.309770087Z" level=info msg="CreateContainer within sandbox \"bb8c694166c25d566a469ed1c78b0b87ba4325e0675c331b650449a74e64dce6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f357c633665c564990634ffd5cf8a7e18ea36ad682e67deffaaa1bac6de2a9b5\"" Oct 8 20:07:46.311396 containerd[1500]: time="2024-10-08T20:07:46.311358687Z" level=info msg="StartContainer for \"f357c633665c564990634ffd5cf8a7e18ea36ad682e67deffaaa1bac6de2a9b5\"" Oct 8 20:07:46.350139 kubelet[2799]: E1008 20:07:46.350108 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmk8v" podUID="0b81e2b9-b925-4fbf-96b6-98335c1c1130" Oct 8 20:07:46.357746 systemd[1]: Started cri-containerd-f357c633665c564990634ffd5cf8a7e18ea36ad682e67deffaaa1bac6de2a9b5.scope - libcontainer container f357c633665c564990634ffd5cf8a7e18ea36ad682e67deffaaa1bac6de2a9b5. 
Oct 8 20:07:46.402835 containerd[1500]: time="2024-10-08T20:07:46.402791833Z" level=info msg="StartContainer for \"f357c633665c564990634ffd5cf8a7e18ea36ad682e67deffaaa1bac6de2a9b5\" returns successfully"
Oct 8 20:07:46.488399 kubelet[2799]: E1008 20:07:46.487902 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:46.488399 kubelet[2799]: W1008 20:07:46.487928 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:46.488399 kubelet[2799]: E1008 20:07:46.487949 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:730 error triplet repeats verbatim (timestamps only differing) through Oct 8 20:07:46.525293 ...]
Oct 8 20:07:47.471023 kubelet[2799]: I1008 20:07:47.470988 2799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 20:07:47.499735 kubelet[2799]: E1008 20:07:47.499646 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 20:07:47.499735 kubelet[2799]: W1008 20:07:47.499669 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 20:07:47.499735 kubelet[2799]: E1008 20:07:47.499690 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same error triplet repeats verbatim through Oct 8 20:07:47.537628, where the captured log is truncated mid-entry ...]
Error: unexpected end of JSON input" Oct 8 20:07:47.538004 kubelet[2799]: E1008 20:07:47.537890 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.538004 kubelet[2799]: W1008 20:07:47.537900 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.538635 kubelet[2799]: E1008 20:07:47.538607 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:47.540676 kubelet[2799]: E1008 20:07:47.539254 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.540676 kubelet[2799]: W1008 20:07:47.539265 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.540676 kubelet[2799]: E1008 20:07:47.540271 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:47.541048 kubelet[2799]: E1008 20:07:47.541036 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.541106 kubelet[2799]: W1008 20:07:47.541096 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.541433 kubelet[2799]: E1008 20:07:47.541337 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:47.541940 kubelet[2799]: E1008 20:07:47.541811 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.542010 kubelet[2799]: W1008 20:07:47.541990 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.542493 kubelet[2799]: E1008 20:07:47.542370 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:47.542882 kubelet[2799]: E1008 20:07:47.542738 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.542882 kubelet[2799]: W1008 20:07:47.542748 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.543573 kubelet[2799]: E1008 20:07:47.543451 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:47.544146 kubelet[2799]: E1008 20:07:47.543775 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.544146 kubelet[2799]: W1008 20:07:47.543787 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.544905 kubelet[2799]: E1008 20:07:47.544502 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:47.545510 kubelet[2799]: E1008 20:07:47.545097 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.545572 kubelet[2799]: W1008 20:07:47.545558 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.546036 kubelet[2799]: E1008 20:07:47.545625 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:47.546689 kubelet[2799]: E1008 20:07:47.546385 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.546758 kubelet[2799]: W1008 20:07:47.546746 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.547021 kubelet[2799]: E1008 20:07:47.547004 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:07:47.547294 kubelet[2799]: E1008 20:07:47.547208 2799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:07:47.547671 kubelet[2799]: W1008 20:07:47.547657 2799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:07:47.548032 kubelet[2799]: E1008 20:07:47.547783 2799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:07:47.586101 containerd[1500]: time="2024-10-08T20:07:47.586046095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:47.587103 containerd[1500]: time="2024-10-08T20:07:47.587058379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 20:07:47.587926 containerd[1500]: time="2024-10-08T20:07:47.587883197Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:47.589689 containerd[1500]: time="2024-10-08T20:07:47.589637653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:47.590541 containerd[1500]: time="2024-10-08T20:07:47.590401626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.316287558s" Oct 8 20:07:47.590541 containerd[1500]: time="2024-10-08T20:07:47.590440078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 20:07:47.592956 containerd[1500]: time="2024-10-08T20:07:47.592924332Z" level=info msg="CreateContainer within sandbox \"f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:07:47.610655 containerd[1500]: time="2024-10-08T20:07:47.610615355Z" level=info msg="CreateContainer within sandbox \"f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a\"" Oct 8 20:07:47.611925 containerd[1500]: time="2024-10-08T20:07:47.611392524Z" level=info msg="StartContainer for \"2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a\"" Oct 8 20:07:47.655328 systemd[1]: Started cri-containerd-2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a.scope - libcontainer container 2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a. Oct 8 20:07:47.690908 containerd[1500]: time="2024-10-08T20:07:47.690442964Z" level=info msg="StartContainer for \"2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a\" returns successfully" Oct 8 20:07:47.712739 systemd[1]: cri-containerd-2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a.scope: Deactivated successfully. 
Oct 8 20:07:47.738448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a-rootfs.mount: Deactivated successfully. Oct 8 20:07:47.772944 containerd[1500]: time="2024-10-08T20:07:47.754080712Z" level=info msg="shim disconnected" id=2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a namespace=k8s.io Oct 8 20:07:47.773350 containerd[1500]: time="2024-10-08T20:07:47.773164565Z" level=warning msg="cleaning up after shim disconnected" id=2f6590bbe8eb8000602e1a2fc4fd22b20b9433189f08f3385d88551e1e8d545a namespace=k8s.io Oct 8 20:07:47.773350 containerd[1500]: time="2024-10-08T20:07:47.773188811Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:07:48.350817 kubelet[2799]: E1008 20:07:48.350780 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmk8v" podUID="0b81e2b9-b925-4fbf-96b6-98335c1c1130" Oct 8 20:07:48.478200 containerd[1500]: time="2024-10-08T20:07:48.478141777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:07:48.492692 kubelet[2799]: I1008 20:07:48.492541 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-76df5dd7bc-l2p92" podStartSLOduration=3.693219263 podStartE2EDuration="5.492499248s" podCreationTimestamp="2024-10-08 20:07:43 +0000 UTC" firstStartedPulling="2024-10-08 20:07:44.473736064 +0000 UTC m=+19.231414495" lastFinishedPulling="2024-10-08 20:07:46.273016051 +0000 UTC m=+21.030694480" observedRunningTime="2024-10-08 20:07:46.490089718 +0000 UTC m=+21.247768148" watchObservedRunningTime="2024-10-08 20:07:48.492499248 +0000 UTC m=+23.250177678" Oct 8 20:07:49.142264 kubelet[2799]: I1008 20:07:49.141656 2799 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Oct 8 20:07:50.351516 kubelet[2799]: E1008 20:07:50.350979 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmk8v" podUID="0b81e2b9-b925-4fbf-96b6-98335c1c1130" Oct 8 20:07:51.583340 containerd[1500]: time="2024-10-08T20:07:51.583271861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:51.584484 containerd[1500]: time="2024-10-08T20:07:51.584438211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 20:07:51.585335 containerd[1500]: time="2024-10-08T20:07:51.585286173Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:51.587248 containerd[1500]: time="2024-10-08T20:07:51.587191508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:51.588473 containerd[1500]: time="2024-10-08T20:07:51.588312381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.110131298s" Oct 8 20:07:51.588473 containerd[1500]: time="2024-10-08T20:07:51.588348880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference 
\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 20:07:51.592050 containerd[1500]: time="2024-10-08T20:07:51.592010505Z" level=info msg="CreateContainer within sandbox \"f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:07:51.645211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231742060.mount: Deactivated successfully. Oct 8 20:07:51.647775 containerd[1500]: time="2024-10-08T20:07:51.647739689Z" level=info msg="CreateContainer within sandbox \"f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3\"" Oct 8 20:07:51.648660 containerd[1500]: time="2024-10-08T20:07:51.648371981Z" level=info msg="StartContainer for \"6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3\"" Oct 8 20:07:51.692360 systemd[1]: Started cri-containerd-6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3.scope - libcontainer container 6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3. Oct 8 20:07:51.721728 containerd[1500]: time="2024-10-08T20:07:51.721635500Z" level=info msg="StartContainer for \"6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3\" returns successfully" Oct 8 20:07:52.109113 systemd[1]: cri-containerd-6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3.scope: Deactivated successfully. Oct 8 20:07:52.142014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3-rootfs.mount: Deactivated successfully. 
Oct 8 20:07:52.152426 containerd[1500]: time="2024-10-08T20:07:52.152352790Z" level=info msg="shim disconnected" id=6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3 namespace=k8s.io Oct 8 20:07:52.152426 containerd[1500]: time="2024-10-08T20:07:52.152403777Z" level=warning msg="cleaning up after shim disconnected" id=6695b03f9101a1c47ff5caa0c98df0c44ef42b1d23f369a074cf3b33a447a4b3 namespace=k8s.io Oct 8 20:07:52.152426 containerd[1500]: time="2024-10-08T20:07:52.152412573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:07:52.170955 kubelet[2799]: I1008 20:07:52.170841 2799 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:07:52.197397 kubelet[2799]: I1008 20:07:52.195699 2799 topology_manager.go:215] "Topology Admit Handler" podUID="106d4675-f2aa-41c2-b0d2-327ddbc42de2" podNamespace="kube-system" podName="coredns-76f75df574-jcw4q" Oct 8 20:07:52.204639 kubelet[2799]: I1008 20:07:52.204618 2799 topology_manager.go:215] "Topology Admit Handler" podUID="ffc6c48e-8dc3-4c58-a149-a815514690e4" podNamespace="kube-system" podName="coredns-76f75df574-gqdhb" Oct 8 20:07:52.209004 kubelet[2799]: I1008 20:07:52.208398 2799 topology_manager.go:215] "Topology Admit Handler" podUID="47c8fbe8-41ef-46a6-81f1-7c5c2c70818f" podNamespace="calico-system" podName="calico-kube-controllers-547cf679d9-vx22q" Oct 8 20:07:52.210008 systemd[1]: Created slice kubepods-burstable-pod106d4675_f2aa_41c2_b0d2_327ddbc42de2.slice - libcontainer container kubepods-burstable-pod106d4675_f2aa_41c2_b0d2_327ddbc42de2.slice. Oct 8 20:07:52.220712 systemd[1]: Created slice kubepods-burstable-podffc6c48e_8dc3_4c58_a149_a815514690e4.slice - libcontainer container kubepods-burstable-podffc6c48e_8dc3_4c58_a149_a815514690e4.slice. Oct 8 20:07:52.227025 systemd[1]: Created slice kubepods-besteffort-pod47c8fbe8_41ef_46a6_81f1_7c5c2c70818f.slice - libcontainer container kubepods-besteffort-pod47c8fbe8_41ef_46a6_81f1_7c5c2c70818f.slice. 
Oct 8 20:07:52.262708 kubelet[2799]: I1008 20:07:52.262638 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbbm\" (UniqueName: \"kubernetes.io/projected/106d4675-f2aa-41c2-b0d2-327ddbc42de2-kube-api-access-vfbbm\") pod \"coredns-76f75df574-jcw4q\" (UID: \"106d4675-f2aa-41c2-b0d2-327ddbc42de2\") " pod="kube-system/coredns-76f75df574-jcw4q" Oct 8 20:07:52.262708 kubelet[2799]: I1008 20:07:52.262686 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c8fbe8-41ef-46a6-81f1-7c5c2c70818f-tigera-ca-bundle\") pod \"calico-kube-controllers-547cf679d9-vx22q\" (UID: \"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f\") " pod="calico-system/calico-kube-controllers-547cf679d9-vx22q" Oct 8 20:07:52.262708 kubelet[2799]: I1008 20:07:52.262712 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/106d4675-f2aa-41c2-b0d2-327ddbc42de2-config-volume\") pod \"coredns-76f75df574-jcw4q\" (UID: \"106d4675-f2aa-41c2-b0d2-327ddbc42de2\") " pod="kube-system/coredns-76f75df574-jcw4q" Oct 8 20:07:52.262963 kubelet[2799]: I1008 20:07:52.262730 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc6c48e-8dc3-4c58-a149-a815514690e4-config-volume\") pod \"coredns-76f75df574-gqdhb\" (UID: \"ffc6c48e-8dc3-4c58-a149-a815514690e4\") " pod="kube-system/coredns-76f75df574-gqdhb" Oct 8 20:07:52.262963 kubelet[2799]: I1008 20:07:52.262749 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjnm\" (UniqueName: \"kubernetes.io/projected/47c8fbe8-41ef-46a6-81f1-7c5c2c70818f-kube-api-access-4qjnm\") pod \"calico-kube-controllers-547cf679d9-vx22q\" (UID: 
\"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f\") " pod="calico-system/calico-kube-controllers-547cf679d9-vx22q" Oct 8 20:07:52.262963 kubelet[2799]: I1008 20:07:52.262766 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqsn\" (UniqueName: \"kubernetes.io/projected/ffc6c48e-8dc3-4c58-a149-a815514690e4-kube-api-access-qpqsn\") pod \"coredns-76f75df574-gqdhb\" (UID: \"ffc6c48e-8dc3-4c58-a149-a815514690e4\") " pod="kube-system/coredns-76f75df574-gqdhb" Oct 8 20:07:52.356635 systemd[1]: Created slice kubepods-besteffort-pod0b81e2b9_b925_4fbf_96b6_98335c1c1130.slice - libcontainer container kubepods-besteffort-pod0b81e2b9_b925_4fbf_96b6_98335c1c1130.slice. Oct 8 20:07:52.360475 containerd[1500]: time="2024-10-08T20:07:52.360258490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmk8v,Uid:0b81e2b9-b925-4fbf-96b6-98335c1c1130,Namespace:calico-system,Attempt:0,}" Oct 8 20:07:52.487477 containerd[1500]: time="2024-10-08T20:07:52.487345259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:07:52.518083 containerd[1500]: time="2024-10-08T20:07:52.517348712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jcw4q,Uid:106d4675-f2aa-41c2-b0d2-327ddbc42de2,Namespace:kube-system,Attempt:0,}" Oct 8 20:07:52.523961 containerd[1500]: time="2024-10-08T20:07:52.523888634Z" level=error msg="Failed to destroy network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.526250 containerd[1500]: time="2024-10-08T20:07:52.525680624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gqdhb,Uid:ffc6c48e-8dc3-4c58-a149-a815514690e4,Namespace:kube-system,Attempt:0,}" Oct 8 20:07:52.528765 
containerd[1500]: time="2024-10-08T20:07:52.528729314Z" level=error msg="encountered an error cleaning up failed sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.529121 containerd[1500]: time="2024-10-08T20:07:52.529097865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmk8v,Uid:0b81e2b9-b925-4fbf-96b6-98335c1c1130,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.529652 kubelet[2799]: E1008 20:07:52.529624 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.529766 kubelet[2799]: E1008 20:07:52.529678 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kmk8v" Oct 8 20:07:52.529766 kubelet[2799]: E1008 20:07:52.529697 2799 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kmk8v" Oct 8 20:07:52.529766 kubelet[2799]: E1008 20:07:52.529744 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kmk8v_calico-system(0b81e2b9-b925-4fbf-96b6-98335c1c1130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kmk8v_calico-system(0b81e2b9-b925-4fbf-96b6-98335c1c1130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kmk8v" podUID="0b81e2b9-b925-4fbf-96b6-98335c1c1130" Oct 8 20:07:52.530661 containerd[1500]: time="2024-10-08T20:07:52.530641262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547cf679d9-vx22q,Uid:47c8fbe8-41ef-46a6-81f1-7c5c2c70818f,Namespace:calico-system,Attempt:0,}" Oct 8 20:07:52.602142 containerd[1500]: time="2024-10-08T20:07:52.602092802Z" level=error msg="Failed to destroy network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.602975 containerd[1500]: time="2024-10-08T20:07:52.602881232Z" level=error msg="encountered an error cleaning up failed sandbox 
\"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.602975 containerd[1500]: time="2024-10-08T20:07:52.602935135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jcw4q,Uid:106d4675-f2aa-41c2-b0d2-327ddbc42de2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.604338 kubelet[2799]: E1008 20:07:52.603388 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.604338 kubelet[2799]: E1008 20:07:52.603438 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jcw4q" Oct 8 20:07:52.604338 kubelet[2799]: E1008 20:07:52.603465 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jcw4q" Oct 8 20:07:52.604509 kubelet[2799]: E1008 20:07:52.603531 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jcw4q_kube-system(106d4675-f2aa-41c2-b0d2-327ddbc42de2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-jcw4q_kube-system(106d4675-f2aa-41c2-b0d2-327ddbc42de2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jcw4q" podUID="106d4675-f2aa-41c2-b0d2-327ddbc42de2" Oct 8 20:07:52.624453 containerd[1500]: time="2024-10-08T20:07:52.624353302Z" level=error msg="Failed to destroy network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.625530 containerd[1500]: time="2024-10-08T20:07:52.625504854Z" level=error msg="encountered an error cleaning up failed sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.626088 containerd[1500]: time="2024-10-08T20:07:52.625616176Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gqdhb,Uid:ffc6c48e-8dc3-4c58-a149-a815514690e4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.628080 kubelet[2799]: E1008 20:07:52.628060 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.628330 kubelet[2799]: E1008 20:07:52.628208 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gqdhb" Oct 8 20:07:52.628330 kubelet[2799]: E1008 20:07:52.628258 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gqdhb" Oct 8 20:07:52.629000 kubelet[2799]: E1008 20:07:52.628425 2799 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gqdhb_kube-system(ffc6c48e-8dc3-4c58-a149-a815514690e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gqdhb_kube-system(ffc6c48e-8dc3-4c58-a149-a815514690e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gqdhb" podUID="ffc6c48e-8dc3-4c58-a149-a815514690e4" Oct 8 20:07:52.636084 containerd[1500]: time="2024-10-08T20:07:52.635988670Z" level=error msg="Failed to destroy network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.636417 containerd[1500]: time="2024-10-08T20:07:52.636368213Z" level=error msg="encountered an error cleaning up failed sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.636515 containerd[1500]: time="2024-10-08T20:07:52.636412386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547cf679d9-vx22q,Uid:47c8fbe8-41ef-46a6-81f1-7c5c2c70818f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.636614 kubelet[2799]: E1008 20:07:52.636575 2799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:52.636614 kubelet[2799]: E1008 20:07:52.636605 2799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547cf679d9-vx22q" Oct 8 20:07:52.636668 kubelet[2799]: E1008 20:07:52.636627 2799 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-547cf679d9-vx22q" Oct 8 20:07:52.636696 kubelet[2799]: E1008 20:07:52.636671 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-547cf679d9-vx22q_calico-system(47c8fbe8-41ef-46a6-81f1-7c5c2c70818f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-547cf679d9-vx22q_calico-system(47c8fbe8-41ef-46a6-81f1-7c5c2c70818f)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547cf679d9-vx22q" podUID="47c8fbe8-41ef-46a6-81f1-7c5c2c70818f" Oct 8 20:07:53.487273 kubelet[2799]: I1008 20:07:53.487026 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Oct 8 20:07:53.491486 kubelet[2799]: I1008 20:07:53.490624 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Oct 8 20:07:53.492513 containerd[1500]: time="2024-10-08T20:07:53.491846699Z" level=info msg="StopPodSandbox for \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\"" Oct 8 20:07:53.492513 containerd[1500]: time="2024-10-08T20:07:53.492363652Z" level=info msg="StopPodSandbox for \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\"" Oct 8 20:07:53.495309 containerd[1500]: time="2024-10-08T20:07:53.493511898Z" level=info msg="Ensure that sandbox dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e in task-service has been cleanup successfully" Oct 8 20:07:53.497795 kubelet[2799]: I1008 20:07:53.497411 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Oct 8 20:07:53.498090 containerd[1500]: time="2024-10-08T20:07:53.498070832Z" level=info msg="StopPodSandbox for \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\"" Oct 8 20:07:53.498329 containerd[1500]: time="2024-10-08T20:07:53.498297764Z" level=info msg="Ensure that sandbox 
30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d in task-service has been cleanup successfully" Oct 8 20:07:53.498999 containerd[1500]: time="2024-10-08T20:07:53.498528974Z" level=info msg="Ensure that sandbox 41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9 in task-service has been cleanup successfully" Oct 8 20:07:53.502581 kubelet[2799]: I1008 20:07:53.502516 2799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:07:53.503197 containerd[1500]: time="2024-10-08T20:07:53.503019058Z" level=info msg="StopPodSandbox for \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\"" Oct 8 20:07:53.503197 containerd[1500]: time="2024-10-08T20:07:53.503145318Z" level=info msg="Ensure that sandbox bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679 in task-service has been cleanup successfully" Oct 8 20:07:53.549275 containerd[1500]: time="2024-10-08T20:07:53.547629402Z" level=error msg="StopPodSandbox for \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\" failed" error="failed to destroy network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:53.549415 kubelet[2799]: E1008 20:07:53.547929 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Oct 8 
20:07:53.549415 kubelet[2799]: E1008 20:07:53.547998 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"} Oct 8 20:07:53.549415 kubelet[2799]: E1008 20:07:53.548030 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b81e2b9-b925-4fbf-96b6-98335c1c1130\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:07:53.549415 kubelet[2799]: E1008 20:07:53.548060 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b81e2b9-b925-4fbf-96b6-98335c1c1130\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kmk8v" podUID="0b81e2b9-b925-4fbf-96b6-98335c1c1130" Oct 8 20:07:53.567106 containerd[1500]: time="2024-10-08T20:07:53.565435313Z" level=error msg="StopPodSandbox for \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\" failed" error="failed to destroy network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:53.567284 kubelet[2799]: E1008 20:07:53.565800 2799 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Oct 8 20:07:53.567284 kubelet[2799]: E1008 20:07:53.565843 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"} Oct 8 20:07:53.567284 kubelet[2799]: E1008 20:07:53.565901 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffc6c48e-8dc3-4c58-a149-a815514690e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:07:53.567284 kubelet[2799]: E1008 20:07:53.565948 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffc6c48e-8dc3-4c58-a149-a815514690e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gqdhb" podUID="ffc6c48e-8dc3-4c58-a149-a815514690e4" Oct 8 20:07:53.570390 containerd[1500]: time="2024-10-08T20:07:53.570330767Z" level=error msg="StopPodSandbox for 
\"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\" failed" error="failed to destroy network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:53.570557 kubelet[2799]: E1008 20:07:53.570535 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Oct 8 20:07:53.570608 kubelet[2799]: E1008 20:07:53.570571 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"} Oct 8 20:07:53.570608 kubelet[2799]: E1008 20:07:53.570608 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:07:53.570723 kubelet[2799]: E1008 20:07:53.570637 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-547cf679d9-vx22q" podUID="47c8fbe8-41ef-46a6-81f1-7c5c2c70818f" Oct 8 20:07:53.576855 containerd[1500]: time="2024-10-08T20:07:53.576818844Z" level=error msg="StopPodSandbox for \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\" failed" error="failed to destroy network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:07:53.577131 kubelet[2799]: E1008 20:07:53.577111 2799 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:07:53.577204 kubelet[2799]: E1008 20:07:53.577141 2799 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679"} Oct 8 20:07:53.577204 kubelet[2799]: E1008 20:07:53.577174 2799 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"106d4675-f2aa-41c2-b0d2-327ddbc42de2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:07:53.577204 kubelet[2799]: E1008 20:07:53.577200 2799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"106d4675-f2aa-41c2-b0d2-327ddbc42de2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jcw4q" podUID="106d4675-f2aa-41c2-b0d2-327ddbc42de2" Oct 8 20:07:58.605704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034499575.mount: Deactivated successfully. Oct 8 20:07:58.690961 containerd[1500]: time="2024-10-08T20:07:58.670647469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 20:07:58.696762 containerd[1500]: time="2024-10-08T20:07:58.690863668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.194145353s" Oct 8 20:07:58.696919 containerd[1500]: time="2024-10-08T20:07:58.696832673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 20:07:58.698774 containerd[1500]: time="2024-10-08T20:07:58.698336636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:58.723265 containerd[1500]: time="2024-10-08T20:07:58.723209161Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:58.723726 containerd[1500]: time="2024-10-08T20:07:58.723692812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:07:58.775191 containerd[1500]: time="2024-10-08T20:07:58.775134231Z" level=info msg="CreateContainer within sandbox \"f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:07:58.852202 containerd[1500]: time="2024-10-08T20:07:58.852152406Z" level=info msg="CreateContainer within sandbox \"f8580ee68853bfaba13fcc4a82bde86f0a93f83f43b25515f3a0759bf39c94da\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d\"" Oct 8 20:07:58.859304 containerd[1500]: time="2024-10-08T20:07:58.858046208Z" level=info msg="StartContainer for \"f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d\"" Oct 8 20:07:59.011438 systemd[1]: Started cri-containerd-f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d.scope - libcontainer container f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d. Oct 8 20:07:59.051200 containerd[1500]: time="2024-10-08T20:07:59.051168353Z" level=info msg="StartContainer for \"f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d\" returns successfully" Oct 8 20:07:59.141060 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:07:59.143842 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 8 20:07:59.619489 kubelet[2799]: I1008 20:07:59.619427 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-x6dfr" podStartSLOduration=2.373121432 podStartE2EDuration="15.590495567s" podCreationTimestamp="2024-10-08 20:07:44 +0000 UTC" firstStartedPulling="2024-10-08 20:07:45.479686311 +0000 UTC m=+20.237364742" lastFinishedPulling="2024-10-08 20:07:58.697060446 +0000 UTC m=+33.454738877" observedRunningTime="2024-10-08 20:07:59.582198218 +0000 UTC m=+34.339876648" watchObservedRunningTime="2024-10-08 20:07:59.590495567 +0000 UTC m=+34.348174027" Oct 8 20:08:00.601811 systemd[1]: run-containerd-runc-k8s.io-f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d-runc.elco2s.mount: Deactivated successfully. Oct 8 20:08:00.780277 kernel: bpftool[4018]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:08:01.001194 systemd-networkd[1395]: vxlan.calico: Link UP Oct 8 20:08:01.001202 systemd-networkd[1395]: vxlan.calico: Gained carrier Oct 8 20:08:02.410411 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Oct 8 20:08:04.352283 containerd[1500]: time="2024-10-08T20:08:04.352207338Z" level=info msg="StopPodSandbox for \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\"" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.417 [INFO][4124] k8s.go 608: Cleaning up netns ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.419 [INFO][4124] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" iface="eth0" netns="/var/run/netns/cni-093bdc43-0b9d-0b74-aa91-b9397ea2ad9f" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.419 [INFO][4124] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" iface="eth0" netns="/var/run/netns/cni-093bdc43-0b9d-0b74-aa91-b9397ea2ad9f" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.419 [INFO][4124] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" iface="eth0" netns="/var/run/netns/cni-093bdc43-0b9d-0b74-aa91-b9397ea2ad9f" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.419 [INFO][4124] k8s.go 615: Releasing IP address(es) ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.419 [INFO][4124] utils.go 188: Calico CNI releasing IP address ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.559 [INFO][4130] ipam_plugin.go 417: Releasing address using handleID ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.560 [INFO][4130] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.560 [INFO][4130] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.569 [WARNING][4130] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.569 [INFO][4130] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.571 [INFO][4130] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:04.576133 containerd[1500]: 2024-10-08 20:08:04.573 [INFO][4124] k8s.go 621: Teardown processing complete. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Oct 8 20:08:04.577394 containerd[1500]: time="2024-10-08T20:08:04.577088331Z" level=info msg="TearDown network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\" successfully" Oct 8 20:08:04.577394 containerd[1500]: time="2024-10-08T20:08:04.577125512Z" level=info msg="StopPodSandbox for \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\" returns successfully" Oct 8 20:08:04.578445 containerd[1500]: time="2024-10-08T20:08:04.578350365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmk8v,Uid:0b81e2b9-b925-4fbf-96b6-98335c1c1130,Namespace:calico-system,Attempt:1,}" Oct 8 20:08:04.582937 systemd[1]: run-netns-cni\x2d093bdc43\x2d0b9d\x2d0b74\x2daa91\x2db9397ea2ad9f.mount: Deactivated successfully. 
Oct 8 20:08:04.705962 systemd-networkd[1395]: cali9d3e992448d: Link UP Oct 8 20:08:04.707009 systemd-networkd[1395]: cali9d3e992448d: Gained carrier Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.626 [INFO][4136] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0 csi-node-driver- calico-system 0b81e2b9-b925-4fbf-96b6-98335c1c1130 666 0 2024-10-08 20:07:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081-1-0-7-3c1e2fa9c6 csi-node-driver-kmk8v eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9d3e992448d [] []}} ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.626 [INFO][4136] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.663 [INFO][4148] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" HandleID="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.672 [INFO][4148] ipam_plugin.go 270: Auto assigning IP 
ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" HandleID="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-7-3c1e2fa9c6", "pod":"csi-node-driver-kmk8v", "timestamp":"2024-10-08 20:08:04.663326757 +0000 UTC"}, Hostname:"ci-4081-1-0-7-3c1e2fa9c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.672 [INFO][4148] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.672 [INFO][4148] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.672 [INFO][4148] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7-3c1e2fa9c6' Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.674 [INFO][4148] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.680 [INFO][4148] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.684 [INFO][4148] ipam.go 489: Trying affinity for 192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.686 [INFO][4148] ipam.go 155: Attempting to load block cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.687 [INFO][4148] ipam.go 232: Affinity 
is confirmed and block has been loaded cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.687 [INFO][4148] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.689 [INFO][4148] ipam.go 1685: Creating new handle: k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5 Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.692 [INFO][4148] ipam.go 1203: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.697 [INFO][4148] ipam.go 1216: Successfully claimed IPs: [192.168.90.1/26] block=192.168.90.0/26 handle="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.697 [INFO][4148] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.1/26] handle="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.697 [INFO][4148] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:08:04.723740 containerd[1500]: 2024-10-08 20:08:04.697 [INFO][4148] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.90.1/26] IPv6=[] ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" HandleID="k8s-pod-network.e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.726145 containerd[1500]: 2024-10-08 20:08:04.701 [INFO][4136] k8s.go 386: Populated endpoint ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b81e2b9-b925-4fbf-96b6-98335c1c1130", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"", Pod:"csi-node-driver-kmk8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9d3e992448d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:04.726145 containerd[1500]: 2024-10-08 20:08:04.701 [INFO][4136] k8s.go 387: Calico CNI using IPs: [192.168.90.1/32] ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.726145 containerd[1500]: 2024-10-08 20:08:04.701 [INFO][4136] dataplane_linux.go 68: Setting the host side veth name to cali9d3e992448d ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.726145 containerd[1500]: 2024-10-08 20:08:04.707 [INFO][4136] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.726145 containerd[1500]: 2024-10-08 20:08:04.708 [INFO][4136] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b81e2b9-b925-4fbf-96b6-98335c1c1130", ResourceVersion:"666", Generation:0, 
CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5", Pod:"csi-node-driver-kmk8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9d3e992448d", MAC:"76:e4:33:2f:59:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:04.726145 containerd[1500]: 2024-10-08 20:08:04.717 [INFO][4136] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5" Namespace="calico-system" Pod="csi-node-driver-kmk8v" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0" Oct 8 20:08:04.753858 containerd[1500]: time="2024-10-08T20:08:04.753763150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:08:04.753858 containerd[1500]: time="2024-10-08T20:08:04.753822623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:08:04.754048 containerd[1500]: time="2024-10-08T20:08:04.753844876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:04.754048 containerd[1500]: time="2024-10-08T20:08:04.753928224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:04.771810 systemd[1]: run-containerd-runc-k8s.io-e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5-runc.JJ4IlJ.mount: Deactivated successfully. Oct 8 20:08:04.779876 systemd[1]: Started cri-containerd-e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5.scope - libcontainer container e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5. Oct 8 20:08:04.804576 containerd[1500]: time="2024-10-08T20:08:04.804546493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmk8v,Uid:0b81e2b9-b925-4fbf-96b6-98335c1c1130,Namespace:calico-system,Attempt:1,} returns sandbox id \"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5\"" Oct 8 20:08:04.814266 containerd[1500]: time="2024-10-08T20:08:04.814207538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:08:05.802475 systemd-networkd[1395]: cali9d3e992448d: Gained IPv6LL Oct 8 20:08:06.351604 containerd[1500]: time="2024-10-08T20:08:06.351290923Z" level=info msg="StopPodSandbox for \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\"" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.397 [INFO][4227] k8s.go 608: Cleaning up netns ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.397 [INFO][4227] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" iface="eth0" netns="/var/run/netns/cni-659ea0c1-c749-4c0d-dacd-f03c54002135" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.398 [INFO][4227] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" iface="eth0" netns="/var/run/netns/cni-659ea0c1-c749-4c0d-dacd-f03c54002135" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.398 [INFO][4227] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" iface="eth0" netns="/var/run/netns/cni-659ea0c1-c749-4c0d-dacd-f03c54002135" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.398 [INFO][4227] k8s.go 615: Releasing IP address(es) ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.398 [INFO][4227] utils.go 188: Calico CNI releasing IP address ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.424 [INFO][4233] ipam_plugin.go 417: Releasing address using handleID ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.424 [INFO][4233] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.424 [INFO][4233] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.429 [WARNING][4233] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.429 [INFO][4233] ipam_plugin.go 445: Releasing address using workloadID ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.431 [INFO][4233] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:06.435055 containerd[1500]: 2024-10-08 20:08:06.433 [INFO][4227] k8s.go 621: Teardown processing complete. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Oct 8 20:08:06.437613 containerd[1500]: time="2024-10-08T20:08:06.435327142Z" level=info msg="TearDown network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\" successfully" Oct 8 20:08:06.437613 containerd[1500]: time="2024-10-08T20:08:06.435350617Z" level=info msg="StopPodSandbox for \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\" returns successfully" Oct 8 20:08:06.438268 systemd[1]: run-netns-cni\x2d659ea0c1\x2dc749\x2d4c0d\x2ddacd\x2df03c54002135.mount: Deactivated successfully. 
Oct 8 20:08:06.438943 containerd[1500]: time="2024-10-08T20:08:06.438870583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gqdhb,Uid:ffc6c48e-8dc3-4c58-a149-a815514690e4,Namespace:kube-system,Attempt:1,}" Oct 8 20:08:06.558420 systemd-networkd[1395]: cali93d4554a1fa: Link UP Oct 8 20:08:06.559808 systemd-networkd[1395]: cali93d4554a1fa: Gained carrier Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.497 [INFO][4239] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0 coredns-76f75df574- kube-system ffc6c48e-8dc3-4c58-a149-a815514690e4 675 0 2024-10-08 20:07:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-7-3c1e2fa9c6 coredns-76f75df574-gqdhb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali93d4554a1fa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.497 [INFO][4239] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.521 [INFO][4250] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" HandleID="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" 
Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.530 [INFO][4250] ipam_plugin.go 270: Auto assigning IP ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" HandleID="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efe00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-7-3c1e2fa9c6", "pod":"coredns-76f75df574-gqdhb", "timestamp":"2024-10-08 20:08:06.521552822 +0000 UTC"}, Hostname:"ci-4081-1-0-7-3c1e2fa9c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.530 [INFO][4250] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.530 [INFO][4250] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.530 [INFO][4250] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7-3c1e2fa9c6' Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.531 [INFO][4250] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.535 [INFO][4250] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.538 [INFO][4250] ipam.go 489: Trying affinity for 192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.540 [INFO][4250] ipam.go 155: Attempting to load block cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.542 [INFO][4250] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.542 [INFO][4250] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.543 [INFO][4250] ipam.go 1685: Creating new handle: k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572 Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.547 [INFO][4250] ipam.go 1203: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.551 [INFO][4250] ipam.go 1216: Successfully claimed IPs: [192.168.90.2/26] block=192.168.90.0/26 
handle="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.552 [INFO][4250] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.2/26] handle="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.552 [INFO][4250] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:06.578704 containerd[1500]: 2024-10-08 20:08:06.552 [INFO][4250] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.90.2/26] IPv6=[] ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" HandleID="k8s-pod-network.2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.580415 containerd[1500]: 2024-10-08 20:08:06.554 [INFO][4239] k8s.go 386: Populated endpoint ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ffc6c48e-8dc3-4c58-a149-a815514690e4", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"", Pod:"coredns-76f75df574-gqdhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93d4554a1fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:06.580415 containerd[1500]: 2024-10-08 20:08:06.554 [INFO][4239] k8s.go 387: Calico CNI using IPs: [192.168.90.2/32] ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.580415 containerd[1500]: 2024-10-08 20:08:06.554 [INFO][4239] dataplane_linux.go 68: Setting the host side veth name to cali93d4554a1fa ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.580415 containerd[1500]: 2024-10-08 20:08:06.560 [INFO][4239] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" 
Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.580415 containerd[1500]: 2024-10-08 20:08:06.560 [INFO][4239] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ffc6c48e-8dc3-4c58-a149-a815514690e4", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572", Pod:"coredns-76f75df574-gqdhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93d4554a1fa", MAC:"3e:35:ce:19:4e:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:06.580415 containerd[1500]: 2024-10-08 20:08:06.573 [INFO][4239] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572" Namespace="kube-system" Pod="coredns-76f75df574-gqdhb" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0" Oct 8 20:08:06.622807 containerd[1500]: time="2024-10-08T20:08:06.622636011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:08:06.622807 containerd[1500]: time="2024-10-08T20:08:06.622682218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:08:06.622807 containerd[1500]: time="2024-10-08T20:08:06.622695143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:06.623800 containerd[1500]: time="2024-10-08T20:08:06.623175408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:06.657767 systemd[1]: Started cri-containerd-2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572.scope - libcontainer container 2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572. 
Oct 8 20:08:06.704737 containerd[1500]: time="2024-10-08T20:08:06.704695643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gqdhb,Uid:ffc6c48e-8dc3-4c58-a149-a815514690e4,Namespace:kube-system,Attempt:1,} returns sandbox id \"2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572\"" Oct 8 20:08:06.708503 containerd[1500]: time="2024-10-08T20:08:06.708460726Z" level=info msg="CreateContainer within sandbox \"2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:08:06.730024 containerd[1500]: time="2024-10-08T20:08:06.729989637Z" level=info msg="CreateContainer within sandbox \"2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95681a285b8b2e73c8308a07d361eb8f6ef9765c6aa1cdc9ed8e26c79af7c3a8\"" Oct 8 20:08:06.731949 containerd[1500]: time="2024-10-08T20:08:06.731147672Z" level=info msg="StartContainer for \"95681a285b8b2e73c8308a07d361eb8f6ef9765c6aa1cdc9ed8e26c79af7c3a8\"" Oct 8 20:08:06.758293 containerd[1500]: time="2024-10-08T20:08:06.758255062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:08:06.759940 containerd[1500]: time="2024-10-08T20:08:06.759911788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 20:08:06.762508 containerd[1500]: time="2024-10-08T20:08:06.762443772Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:08:06.765343 systemd[1]: Started cri-containerd-95681a285b8b2e73c8308a07d361eb8f6ef9765c6aa1cdc9ed8e26c79af7c3a8.scope - libcontainer container 95681a285b8b2e73c8308a07d361eb8f6ef9765c6aa1cdc9ed8e26c79af7c3a8. 
Oct 8 20:08:06.766503 containerd[1500]: time="2024-10-08T20:08:06.766460175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:08:06.767555 containerd[1500]: time="2024-10-08T20:08:06.767523661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.953266229s" Oct 8 20:08:06.767601 containerd[1500]: time="2024-10-08T20:08:06.767554300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 20:08:06.769724 containerd[1500]: time="2024-10-08T20:08:06.769515286Z" level=info msg="CreateContainer within sandbox \"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:08:06.795533 containerd[1500]: time="2024-10-08T20:08:06.795402891Z" level=info msg="CreateContainer within sandbox \"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fb2309eebb2c19692a2d15540bd3cd8e2835a84ae04296b4782e67e6e9d0a8b5\"" Oct 8 20:08:06.799535 containerd[1500]: time="2024-10-08T20:08:06.798429889Z" level=info msg="StartContainer for \"fb2309eebb2c19692a2d15540bd3cd8e2835a84ae04296b4782e67e6e9d0a8b5\"" Oct 8 20:08:06.812073 containerd[1500]: time="2024-10-08T20:08:06.812035292Z" level=info msg="StartContainer for \"95681a285b8b2e73c8308a07d361eb8f6ef9765c6aa1cdc9ed8e26c79af7c3a8\" returns successfully" Oct 8 20:08:06.836392 systemd[1]: Started 
cri-containerd-fb2309eebb2c19692a2d15540bd3cd8e2835a84ae04296b4782e67e6e9d0a8b5.scope - libcontainer container fb2309eebb2c19692a2d15540bd3cd8e2835a84ae04296b4782e67e6e9d0a8b5. Oct 8 20:08:06.870998 containerd[1500]: time="2024-10-08T20:08:06.870962589Z" level=info msg="StartContainer for \"fb2309eebb2c19692a2d15540bd3cd8e2835a84ae04296b4782e67e6e9d0a8b5\" returns successfully" Oct 8 20:08:06.873031 containerd[1500]: time="2024-10-08T20:08:06.872807975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:08:07.353271 containerd[1500]: time="2024-10-08T20:08:07.353060277Z" level=info msg="StopPodSandbox for \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\"" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.407 [INFO][4402] k8s.go 608: Cleaning up netns ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.407 [INFO][4402] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" iface="eth0" netns="/var/run/netns/cni-b4fef7d2-05a9-8012-018f-e725f04c78bc" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.408 [INFO][4402] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" iface="eth0" netns="/var/run/netns/cni-b4fef7d2-05a9-8012-018f-e725f04c78bc" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.408 [INFO][4402] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" iface="eth0" netns="/var/run/netns/cni-b4fef7d2-05a9-8012-018f-e725f04c78bc" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.408 [INFO][4402] k8s.go 615: Releasing IP address(es) ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.408 [INFO][4402] utils.go 188: Calico CNI releasing IP address ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.429 [INFO][4409] ipam_plugin.go 417: Releasing address using handleID ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.429 [INFO][4409] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.429 [INFO][4409] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.434 [WARNING][4409] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.434 [INFO][4409] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.436 [INFO][4409] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:07.441081 containerd[1500]: 2024-10-08 20:08:07.438 [INFO][4402] k8s.go 621: Teardown processing complete. ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:07.442445 containerd[1500]: time="2024-10-08T20:08:07.441597215Z" level=info msg="TearDown network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\" successfully" Oct 8 20:08:07.442445 containerd[1500]: time="2024-10-08T20:08:07.441621400Z" level=info msg="StopPodSandbox for \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\" returns successfully" Oct 8 20:08:07.444625 containerd[1500]: time="2024-10-08T20:08:07.443587718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jcw4q,Uid:106d4675-f2aa-41c2-b0d2-327ddbc42de2,Namespace:kube-system,Attempt:1,}" Oct 8 20:08:07.445548 systemd[1]: run-netns-cni\x2db4fef7d2\x2d05a9\x2d8012\x2d018f\x2de725f04c78bc.mount: Deactivated successfully. 
Oct 8 20:08:07.550494 systemd-networkd[1395]: cali83d246ae6a4: Link UP Oct 8 20:08:07.550678 systemd-networkd[1395]: cali83d246ae6a4: Gained carrier Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.485 [INFO][4416] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0 coredns-76f75df574- kube-system 106d4675-f2aa-41c2-b0d2-327ddbc42de2 689 0 2024-10-08 20:07:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-7-3c1e2fa9c6 coredns-76f75df574-jcw4q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83d246ae6a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.485 [INFO][4416] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.512 [INFO][4427] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" HandleID="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.519 [INFO][4427] ipam_plugin.go 270: Auto assigning IP 
ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" HandleID="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292a70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-7-3c1e2fa9c6", "pod":"coredns-76f75df574-jcw4q", "timestamp":"2024-10-08 20:08:07.512474322 +0000 UTC"}, Hostname:"ci-4081-1-0-7-3c1e2fa9c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.519 [INFO][4427] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.519 [INFO][4427] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.519 [INFO][4427] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7-3c1e2fa9c6' Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.520 [INFO][4427] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.525 [INFO][4427] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.529 [INFO][4427] ipam.go 489: Trying affinity for 192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.531 [INFO][4427] ipam.go 155: Attempting to load block cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.533 [INFO][4427] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.533 [INFO][4427] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.534 [INFO][4427] ipam.go 1685: Creating new handle: k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2 Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.537 [INFO][4427] ipam.go 1203: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.542 [INFO][4427] ipam.go 1216: Successfully claimed IPs: [192.168.90.3/26] block=192.168.90.0/26 
handle="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.542 [INFO][4427] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.3/26] handle="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.542 [INFO][4427] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:07.564394 containerd[1500]: 2024-10-08 20:08:07.542 [INFO][4427] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.90.3/26] IPv6=[] ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" HandleID="k8s-pod-network.d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.566429 containerd[1500]: 2024-10-08 20:08:07.545 [INFO][4416] k8s.go 386: Populated endpoint ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"106d4675-f2aa-41c2-b0d2-327ddbc42de2", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"", Pod:"coredns-76f75df574-jcw4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83d246ae6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:07.566429 containerd[1500]: 2024-10-08 20:08:07.545 [INFO][4416] k8s.go 387: Calico CNI using IPs: [192.168.90.3/32] ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.566429 containerd[1500]: 2024-10-08 20:08:07.546 [INFO][4416] dataplane_linux.go 68: Setting the host side veth name to cali83d246ae6a4 ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.566429 containerd[1500]: 2024-10-08 20:08:07.547 [INFO][4416] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" 
Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.566429 containerd[1500]: 2024-10-08 20:08:07.547 [INFO][4416] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"106d4675-f2aa-41c2-b0d2-327ddbc42de2", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2", Pod:"coredns-76f75df574-jcw4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83d246ae6a4", MAC:"92:fd:10:37:bd:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:07.566429 containerd[1500]: 2024-10-08 20:08:07.559 [INFO][4416] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2" Namespace="kube-system" Pod="coredns-76f75df574-jcw4q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:07.593083 containerd[1500]: time="2024-10-08T20:08:07.592365495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:08:07.593083 containerd[1500]: time="2024-10-08T20:08:07.592938937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:08:07.593083 containerd[1500]: time="2024-10-08T20:08:07.592999793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:07.593364 containerd[1500]: time="2024-10-08T20:08:07.593306758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:07.631127 systemd[1]: Started cri-containerd-d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2.scope - libcontainer container d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2. 
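The IPAM sequence logged above (acquire the host-wide lock, confirm the host's affinity for 192.168.90.0/26, load the block, claim the lowest free address, write the block back) can be modeled roughly as below. This is an illustrative toy, not Calico's actual implementation; the `AllocationBlock` class and its fields are hypothetical, and the pre-claimed ordinals {0, 1, 2} are only inferred from the fact that the coredns pod received 192.168.90.3.

```python
import ipaddress
from dataclasses import dataclass, field

@dataclass
class AllocationBlock:
    """Toy model of a Calico affine block (e.g. 192.168.90.0/26)."""
    cidr: ipaddress.IPv4Network
    allocated: set = field(default_factory=set)  # ordinals already claimed

    def auto_assign(self, handle_id: str) -> ipaddress.IPv4Address:
        """Claim the lowest free ordinal in the block, mirroring the log's
        'Attempting to assign 1 addresses from block' step."""
        for ordinal in range(self.cidr.num_addresses):
            if ordinal not in self.allocated:
                self.allocated.add(ordinal)  # 'Writing block in order to claim IPs'
                return self.cidr.network_address + ordinal
        raise RuntimeError(f"block {self.cidr} is full")

block = AllocationBlock(ipaddress.ip_network("192.168.90.0/26"), {0, 1, 2})
print(block.auto_assign("k8s-pod-network.d5709c45f1b8..."))  # 192.168.90.3
```

A second claim against the same block yields 192.168.90.4, matching the address the calico-kube-controllers pod is assigned further down.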
Oct 8 20:08:07.656088 kubelet[2799]: I1008 20:08:07.655897 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gqdhb" podStartSLOduration=29.65585785 podStartE2EDuration="29.65585785s" podCreationTimestamp="2024-10-08 20:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:08:07.638213218 +0000 UTC m=+42.395891648" watchObservedRunningTime="2024-10-08 20:08:07.65585785 +0000 UTC m=+42.413536280" Oct 8 20:08:07.724526 containerd[1500]: time="2024-10-08T20:08:07.724479528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jcw4q,Uid:106d4675-f2aa-41c2-b0d2-327ddbc42de2,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2\"" Oct 8 20:08:07.727030 containerd[1500]: time="2024-10-08T20:08:07.726999971Z" level=info msg="CreateContainer within sandbox \"d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:08:07.736162 containerd[1500]: time="2024-10-08T20:08:07.736056299Z" level=info msg="CreateContainer within sandbox \"d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d85e584555fd9b8f2ef273c7166bf3c3c76eac689bad12a70d884a86470e1a78\"" Oct 8 20:08:07.736628 containerd[1500]: time="2024-10-08T20:08:07.736597539Z" level=info msg="StartContainer for \"d85e584555fd9b8f2ef273c7166bf3c3c76eac689bad12a70d884a86470e1a78\"" Oct 8 20:08:07.769109 systemd[1]: Started cri-containerd-d85e584555fd9b8f2ef273c7166bf3c3c76eac689bad12a70d884a86470e1a78.scope - libcontainer container d85e584555fd9b8f2ef273c7166bf3c3c76eac689bad12a70d884a86470e1a78. 
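The `podStartSLOduration` in the kubelet entry above appears to be simply `observedRunningTime − podCreationTimestamp` when, as here, both image-pull timestamps are the zero time (no pull contributed). That relationship is an assumption, but the logged numbers bear it out (timestamps rounded to microseconds):

```python
from datetime import datetime, timezone

# Timestamps from the pod_startup_latency_tracker entry for coredns-76f75df574-gqdhb.
created  = datetime(2024, 10, 8, 20, 7, 38, tzinfo=timezone.utc)
# 20:08:07.65585785, rounded to whole microseconds for datetime:
observed = datetime(2024, 10, 8, 20, 8, 7, 655858, tzinfo=timezone.utc)

slo_duration = (observed - created).total_seconds()
print(f"{slo_duration:.6f}s")  # 29.655858s, agreeing with podStartSLOduration=29.65585785s
```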
Oct 8 20:08:07.800834 containerd[1500]: time="2024-10-08T20:08:07.800734693Z" level=info msg="StartContainer for \"d85e584555fd9b8f2ef273c7166bf3c3c76eac689bad12a70d884a86470e1a78\" returns successfully" Oct 8 20:08:08.351883 containerd[1500]: time="2024-10-08T20:08:08.351835991Z" level=info msg="StopPodSandbox for \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\"" Oct 8 20:08:08.362393 systemd-networkd[1395]: cali93d4554a1fa: Gained IPv6LL Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.405 [INFO][4541] k8s.go 608: Cleaning up netns ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.405 [INFO][4541] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" iface="eth0" netns="/var/run/netns/cni-1b21cadb-e892-2eb6-9558-3a2232e577a4" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.406 [INFO][4541] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" iface="eth0" netns="/var/run/netns/cni-1b21cadb-e892-2eb6-9558-3a2232e577a4" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.407 [INFO][4541] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" iface="eth0" netns="/var/run/netns/cni-1b21cadb-e892-2eb6-9558-3a2232e577a4" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.407 [INFO][4541] k8s.go 615: Releasing IP address(es) ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.407 [INFO][4541] utils.go 188: Calico CNI releasing IP address ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.440 [INFO][4547] ipam_plugin.go 417: Releasing address using handleID ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.440 [INFO][4547] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.440 [INFO][4547] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.449 [WARNING][4547] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.449 [INFO][4547] ipam_plugin.go 445: Releasing address using workloadID ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.450 [INFO][4547] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:08.454756 containerd[1500]: 2024-10-08 20:08:08.452 [INFO][4541] k8s.go 621: Teardown processing complete. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Oct 8 20:08:08.456457 containerd[1500]: time="2024-10-08T20:08:08.455530578Z" level=info msg="TearDown network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\" successfully" Oct 8 20:08:08.456457 containerd[1500]: time="2024-10-08T20:08:08.455564202Z" level=info msg="StopPodSandbox for \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\" returns successfully" Oct 8 20:08:08.457272 containerd[1500]: time="2024-10-08T20:08:08.457102092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547cf679d9-vx22q,Uid:47c8fbe8-41ef-46a6-81f1-7c5c2c70818f,Namespace:calico-system,Attempt:1,}" Oct 8 20:08:08.459012 systemd[1]: run-netns-cni\x2d1b21cadb\x2de892\x2d2eb6\x2d9558\x2d3a2232e577a4.mount: Deactivated successfully. 
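The teardown entries above show the release path being deliberately idempotent: releasing by handle ID finds nothing (the WARNING), is ignored, and release by workload ID is tried next, so a repeated StopPodSandbox can never fail on an already-freed allocation. A minimal sketch of that behaviour, with hypothetical names and a plain dict standing in for the IPAM datastore:

```python
def release_address(allocations: dict, handle_id: str, workload_id: str) -> list:
    """Release any IPs recorded under handle_id, then workload_id.
    Missing entries are logged and ignored rather than treated as errors,
    so the call is safe to repeat (the log's WARNING + 'Ignoring')."""
    released = []
    for key in (handle_id, workload_id):
        ips = allocations.pop(key, None)
        if ips is None:
            print(f"Asked to release address but it doesn't exist. Ignoring {key!r}")
            continue
        released.extend(ips)
    return released

allocations = {"workload-a": ["192.168.90.3"]}
print(release_address(allocations, "handle-a", "workload-a"))  # ['192.168.90.3']
# Second teardown of the same sandbox is a harmless no-op.
print(release_address(allocations, "handle-a", "workload-a"))  # []
```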
Oct 8 20:08:08.576735 systemd-networkd[1395]: cali00dc0d59b2e: Link UP Oct 8 20:08:08.576963 systemd-networkd[1395]: cali00dc0d59b2e: Gained carrier Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.504 [INFO][4557] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0 calico-kube-controllers-547cf679d9- calico-system 47c8fbe8-41ef-46a6-81f1-7c5c2c70818f 706 0 2024-10-08 20:07:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:547cf679d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-1-0-7-3c1e2fa9c6 calico-kube-controllers-547cf679d9-vx22q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali00dc0d59b2e [] []}} ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.504 [INFO][4557] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.533 [INFO][4565] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" HandleID="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" 
Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.539 [INFO][4565] ipam_plugin.go 270: Auto assigning IP ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" HandleID="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000114a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-7-3c1e2fa9c6", "pod":"calico-kube-controllers-547cf679d9-vx22q", "timestamp":"2024-10-08 20:08:08.533028732 +0000 UTC"}, Hostname:"ci-4081-1-0-7-3c1e2fa9c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.539 [INFO][4565] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.539 [INFO][4565] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.539 [INFO][4565] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7-3c1e2fa9c6' Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.541 [INFO][4565] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.545 [INFO][4565] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.550 [INFO][4565] ipam.go 489: Trying affinity for 192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.552 [INFO][4565] ipam.go 155: Attempting to load block cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.555 [INFO][4565] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.555 [INFO][4565] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.557 [INFO][4565] ipam.go 1685: Creating new handle: k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1 Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.562 [INFO][4565] ipam.go 1203: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.569 [INFO][4565] ipam.go 1216: Successfully claimed IPs: [192.168.90.4/26] block=192.168.90.0/26 
handle="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.569 [INFO][4565] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.4/26] handle="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.569 [INFO][4565] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:08.601068 containerd[1500]: 2024-10-08 20:08:08.569 [INFO][4565] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.90.4/26] IPv6=[] ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" HandleID="k8s-pod-network.e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.603175 containerd[1500]: 2024-10-08 20:08:08.572 [INFO][4557] k8s.go 386: Populated endpoint ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0", GenerateName:"calico-kube-controllers-547cf679d9-", Namespace:"calico-system", SelfLink:"", UID:"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547cf679d9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"", Pod:"calico-kube-controllers-547cf679d9-vx22q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00dc0d59b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:08.603175 containerd[1500]: 2024-10-08 20:08:08.572 [INFO][4557] k8s.go 387: Calico CNI using IPs: [192.168.90.4/32] ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.603175 containerd[1500]: 2024-10-08 20:08:08.572 [INFO][4557] dataplane_linux.go 68: Setting the host side veth name to cali00dc0d59b2e ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.603175 containerd[1500]: 2024-10-08 20:08:08.577 [INFO][4557] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 
8 20:08:08.603175 containerd[1500]: 2024-10-08 20:08:08.578 [INFO][4557] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0", GenerateName:"calico-kube-controllers-547cf679d9-", Namespace:"calico-system", SelfLink:"", UID:"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547cf679d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1", Pod:"calico-kube-controllers-547cf679d9-vx22q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00dc0d59b2e", MAC:"66:d9:56:20:e6:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} 
Oct 8 20:08:08.603175 containerd[1500]: 2024-10-08 20:08:08.593 [INFO][4557] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1" Namespace="calico-system" Pod="calico-kube-controllers-547cf679d9-vx22q" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:08.684803 kubelet[2799]: I1008 20:08:08.680735 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jcw4q" podStartSLOduration=30.680594763 podStartE2EDuration="30.680594763s" podCreationTimestamp="2024-10-08 20:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:08:08.634358381 +0000 UTC m=+43.392036812" watchObservedRunningTime="2024-10-08 20:08:08.680594763 +0000 UTC m=+43.438273193" Oct 8 20:08:08.701737 containerd[1500]: time="2024-10-08T20:08:08.701280512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:08:08.701737 containerd[1500]: time="2024-10-08T20:08:08.701338854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:08:08.701737 containerd[1500]: time="2024-10-08T20:08:08.701352770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:08.701737 containerd[1500]: time="2024-10-08T20:08:08.701428304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:08.728683 systemd[1]: Started cri-containerd-e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1.scope - libcontainer container e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1. 
Oct 8 20:08:08.748061 systemd-networkd[1395]: cali83d246ae6a4: Gained IPv6LL
Oct 8 20:08:08.780199 containerd[1500]: time="2024-10-08T20:08:08.780157155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-547cf679d9-vx22q,Uid:47c8fbe8-41ef-46a6-81f1-7c5c2c70818f,Namespace:calico-system,Attempt:1,} returns sandbox id \"e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1\""
Oct 8 20:08:08.830307 containerd[1500]: time="2024-10-08T20:08:08.830111671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:08:08.831090 containerd[1500]: time="2024-10-08T20:08:08.831046292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 8 20:08:08.831947 containerd[1500]: time="2024-10-08T20:08:08.831900008Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:08:08.833600 containerd[1500]: time="2024-10-08T20:08:08.833536398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:08:08.834384 containerd[1500]: time="2024-10-08T20:08:08.833986535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.961006231s"
Oct 8 20:08:08.834384 containerd[1500]: time="2024-10-08T20:08:08.834013506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 8 20:08:08.835097 containerd[1500]: time="2024-10-08T20:08:08.834988965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Oct 8 20:08:08.836786 containerd[1500]: time="2024-10-08T20:08:08.836767413Z" level=info msg="CreateContainer within sandbox \"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 8 20:08:08.860571 containerd[1500]: time="2024-10-08T20:08:08.860466477Z" level=info msg="CreateContainer within sandbox \"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"31ab6e387bc90f7996d0822cd0e99996e9e8d5a7bdcb8952069a18128c65322c\""
Oct 8 20:08:08.861188 containerd[1500]: time="2024-10-08T20:08:08.860950880Z" level=info msg="StartContainer for \"31ab6e387bc90f7996d0822cd0e99996e9e8d5a7bdcb8952069a18128c65322c\""
Oct 8 20:08:08.887333 systemd[1]: Started cri-containerd-31ab6e387bc90f7996d0822cd0e99996e9e8d5a7bdcb8952069a18128c65322c.scope - libcontainer container 31ab6e387bc90f7996d0822cd0e99996e9e8d5a7bdcb8952069a18128c65322c.
Oct 8 20:08:08.911400 containerd[1500]: time="2024-10-08T20:08:08.911363549Z" level=info msg="StartContainer for \"31ab6e387bc90f7996d0822cd0e99996e9e8d5a7bdcb8952069a18128c65322c\" returns successfully"
Oct 8 20:08:09.531443 kubelet[2799]: I1008 20:08:09.531363 2799 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 8 20:08:09.532598 kubelet[2799]: I1008 20:08:09.532567 2799 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 8 20:08:10.090442 systemd-networkd[1395]: cali00dc0d59b2e: Gained IPv6LL
Oct 8 20:08:10.545981 containerd[1500]: time="2024-10-08T20:08:10.545906479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:08:10.547695 containerd[1500]: time="2024-10-08T20:08:10.547640534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125"
Oct 8 20:08:10.550349 containerd[1500]: time="2024-10-08T20:08:10.550304992Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:08:10.553049 containerd[1500]: time="2024-10-08T20:08:10.552435974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:08:10.553049 containerd[1500]: time="2024-10-08T20:08:10.552935657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 1.717777799s"
Oct 8 20:08:10.553049 containerd[1500]: time="2024-10-08T20:08:10.552978939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\""
Oct 8 20:08:10.561277 containerd[1500]: time="2024-10-08T20:08:10.560857274Z" level=info msg="CreateContainer within sandbox \"e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Oct 8 20:08:10.577954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346008952.mount: Deactivated successfully.
Oct 8 20:08:10.586030 containerd[1500]: time="2024-10-08T20:08:10.585986961Z" level=info msg="CreateContainer within sandbox \"e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac\""
Oct 8 20:08:10.587603 containerd[1500]: time="2024-10-08T20:08:10.586546316Z" level=info msg="StartContainer for \"93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac\""
Oct 8 20:08:10.618347 systemd[1]: Started cri-containerd-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac.scope - libcontainer container 93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac.
Oct 8 20:08:10.668206 containerd[1500]: time="2024-10-08T20:08:10.668159646Z" level=info msg="StartContainer for \"93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac\" returns successfully"
Oct 8 20:08:11.649383 kubelet[2799]: I1008 20:08:11.649321 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-kmk8v" podStartSLOduration=23.624228877 podStartE2EDuration="27.649266903s" podCreationTimestamp="2024-10-08 20:07:44 +0000 UTC" firstStartedPulling="2024-10-08 20:08:04.809302754 +0000 UTC m=+39.566981184" lastFinishedPulling="2024-10-08 20:08:08.83434078 +0000 UTC m=+43.592019210" observedRunningTime="2024-10-08 20:08:09.63358689 +0000 UTC m=+44.391265320" watchObservedRunningTime="2024-10-08 20:08:11.649266903 +0000 UTC m=+46.406945363"
Oct 8 20:08:11.653730 kubelet[2799]: I1008 20:08:11.651577 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-547cf679d9-vx22q" podStartSLOduration=25.881605152 podStartE2EDuration="27.651524576s" podCreationTimestamp="2024-10-08 20:07:44 +0000 UTC" firstStartedPulling="2024-10-08 20:08:08.783753288 +0000 UTC m=+43.541431717" lastFinishedPulling="2024-10-08 20:08:10.553672711 +0000 UTC m=+45.311351141" observedRunningTime="2024-10-08 20:08:11.646142397 +0000 UTC m=+46.403820867" watchObservedRunningTime="2024-10-08 20:08:11.651524576 +0000 UTC m=+46.409203036"
Oct 8 20:08:11.675419 systemd[1]: run-containerd-runc-k8s.io-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac-runc.rRScwI.mount: Deactivated successfully.
Oct 8 20:08:25.370576 containerd[1500]: time="2024-10-08T20:08:25.370511626Z" level=info msg="StopPodSandbox for \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\""
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.416 [WARNING][4796] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b81e2b9-b925-4fbf-96b6-98335c1c1130", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5", Pod:"csi-node-driver-kmk8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9d3e992448d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.416 [INFO][4796] k8s.go 608: Cleaning up netns ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.416 [INFO][4796] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" iface="eth0" netns=""
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.416 [INFO][4796] k8s.go 615: Releasing IP address(es) ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.416 [INFO][4796] utils.go 188: Calico CNI releasing IP address ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.445 [INFO][4802] ipam_plugin.go 417: Releasing address using handleID ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0"
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.445 [INFO][4802] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.445 [INFO][4802] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.449 [WARNING][4802] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0"
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.449 [INFO][4802] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0"
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.451 [INFO][4802] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:08:25.455609 containerd[1500]: 2024-10-08 20:08:25.453 [INFO][4796] k8s.go 621: Teardown processing complete. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.456446 containerd[1500]: time="2024-10-08T20:08:25.455641170Z" level=info msg="TearDown network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\" successfully"
Oct 8 20:08:25.456446 containerd[1500]: time="2024-10-08T20:08:25.455661839Z" level=info msg="StopPodSandbox for \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\" returns successfully"
Oct 8 20:08:25.456446 containerd[1500]: time="2024-10-08T20:08:25.456014222Z" level=info msg="RemovePodSandbox for \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\""
Oct 8 20:08:25.458357 containerd[1500]: time="2024-10-08T20:08:25.458072358Z" level=info msg="Forcibly stopping sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\""
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.488 [WARNING][4820] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b81e2b9-b925-4fbf-96b6-98335c1c1130", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"e6e87401beb46a9b2b4e06aa4a68ea33aa1bd3b473e219c67f1eb6a1057905f5", Pod:"csi-node-driver-kmk8v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9d3e992448d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.488 [INFO][4820] k8s.go 608: Cleaning up netns ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.488 [INFO][4820] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" iface="eth0" netns=""
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.488 [INFO][4820] k8s.go 615: Releasing IP address(es) ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.488 [INFO][4820] utils.go 188: Calico CNI releasing IP address ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.507 [INFO][4826] ipam_plugin.go 417: Releasing address using handleID ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0"
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.507 [INFO][4826] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.507 [INFO][4826] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.511 [WARNING][4826] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0"
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.511 [INFO][4826] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" HandleID="k8s-pod-network.dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-csi--node--driver--kmk8v-eth0"
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.514 [INFO][4826] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:08:25.517968 containerd[1500]: 2024-10-08 20:08:25.516 [INFO][4820] k8s.go 621: Teardown processing complete. ContainerID="dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e"
Oct 8 20:08:25.519073 containerd[1500]: time="2024-10-08T20:08:25.517998564Z" level=info msg="TearDown network for sandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\" successfully"
Oct 8 20:08:25.526077 containerd[1500]: time="2024-10-08T20:08:25.526043150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 20:08:25.526132 containerd[1500]: time="2024-10-08T20:08:25.526100630Z" level=info msg="RemovePodSandbox \"dd5cc048e18fce56dc3b25424e4029d9edbb4187c056776480e45af71db0016e\" returns successfully"
Oct 8 20:08:25.526863 containerd[1500]: time="2024-10-08T20:08:25.526646771Z" level=info msg="StopPodSandbox for \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\""
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.556 [WARNING][4844] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ffc6c48e-8dc3-4c58-a149-a815514690e4", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572", Pod:"coredns-76f75df574-gqdhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93d4554a1fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.556 [INFO][4844] k8s.go 608: Cleaning up netns ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.556 [INFO][4844] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" iface="eth0" netns=""
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.556 [INFO][4844] k8s.go 615: Releasing IP address(es) ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.556 [INFO][4844] utils.go 188: Calico CNI releasing IP address ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.575 [INFO][4851] ipam_plugin.go 417: Releasing address using handleID ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0"
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.575 [INFO][4851] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.575 [INFO][4851] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.580 [WARNING][4851] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0"
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.580 [INFO][4851] ipam_plugin.go 445: Releasing address using workloadID ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0"
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.581 [INFO][4851] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:08:25.585683 containerd[1500]: 2024-10-08 20:08:25.583 [INFO][4844] k8s.go 621: Teardown processing complete. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.586253 containerd[1500]: time="2024-10-08T20:08:25.586200486Z" level=info msg="TearDown network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\" successfully"
Oct 8 20:08:25.586330 containerd[1500]: time="2024-10-08T20:08:25.586250662Z" level=info msg="StopPodSandbox for \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\" returns successfully"
Oct 8 20:08:25.586698 containerd[1500]: time="2024-10-08T20:08:25.586663669Z" level=info msg="RemovePodSandbox for \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\""
Oct 8 20:08:25.586698 containerd[1500]: time="2024-10-08T20:08:25.586693587Z" level=info msg="Forcibly stopping sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\""
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.617 [WARNING][4869] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ffc6c48e-8dc3-4c58-a149-a815514690e4", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"2817c4cf20cbc55a46d8a483af034c74d07698f517f1e3ad27c11afac411c572", Pod:"coredns-76f75df574-gqdhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93d4554a1fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.618 [INFO][4869] k8s.go 608: Cleaning up netns ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.618 [INFO][4869] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" iface="eth0" netns=""
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.618 [INFO][4869] k8s.go 615: Releasing IP address(es) ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.618 [INFO][4869] utils.go 188: Calico CNI releasing IP address ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.636 [INFO][4875] ipam_plugin.go 417: Releasing address using handleID ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0"
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.636 [INFO][4875] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.636 [INFO][4875] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.640 [WARNING][4875] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0"
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.640 [INFO][4875] ipam_plugin.go 445: Releasing address using workloadID ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" HandleID="k8s-pod-network.30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--gqdhb-eth0"
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.641 [INFO][4875] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:08:25.646734 containerd[1500]: 2024-10-08 20:08:25.644 [INFO][4869] k8s.go 621: Teardown processing complete. ContainerID="30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d"
Oct 8 20:08:25.646734 containerd[1500]: time="2024-10-08T20:08:25.646405976Z" level=info msg="TearDown network for sandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\" successfully"
Oct 8 20:08:25.651409 containerd[1500]: time="2024-10-08T20:08:25.651158010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 20:08:25.651409 containerd[1500]: time="2024-10-08T20:08:25.651229648Z" level=info msg="RemovePodSandbox \"30a7c2f07df1bc24c4efcf58c645680c03ba72f1521aa1b989e52cb5369fba1d\" returns successfully"
Oct 8 20:08:25.651573 containerd[1500]: time="2024-10-08T20:08:25.651549107Z" level=info msg="StopPodSandbox for \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\""
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.693 [WARNING][4893] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0", GenerateName:"calico-kube-controllers-547cf679d9-", Namespace:"calico-system", SelfLink:"", UID:"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547cf679d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1", Pod:"calico-kube-controllers-547cf679d9-vx22q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00dc0d59b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.693 [INFO][4893] k8s.go 608: Cleaning up netns ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.693 [INFO][4893] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" iface="eth0" netns=""
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.693 [INFO][4893] k8s.go 615: Releasing IP address(es) ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.693 [INFO][4893] utils.go 188: Calico CNI releasing IP address ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.711 [INFO][4902] ipam_plugin.go 417: Releasing address using handleID ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0"
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.712 [INFO][4902] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.712 [INFO][4902] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.716 [WARNING][4902] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0"
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.716 [INFO][4902] ipam_plugin.go 445: Releasing address using workloadID ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0"
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.717 [INFO][4902] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:08:25.721787 containerd[1500]: 2024-10-08 20:08:25.719 [INFO][4893] k8s.go 621: Teardown processing complete. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"
Oct 8 20:08:25.722616 containerd[1500]: time="2024-10-08T20:08:25.721796932Z" level=info msg="TearDown network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\" successfully"
Oct 8 20:08:25.722616 containerd[1500]: time="2024-10-08T20:08:25.721823142Z" level=info msg="StopPodSandbox for \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\" returns successfully"
Oct 8 20:08:25.722616 containerd[1500]: time="2024-10-08T20:08:25.722272800Z" level=info msg="RemovePodSandbox for \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\""
Oct 8 20:08:25.722616 containerd[1500]: time="2024-10-08T20:08:25.722295844Z" level=info msg="Forcibly stopping sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\""
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.752 [WARNING][4920] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0", GenerateName:"calico-kube-controllers-547cf679d9-", Namespace:"calico-system", SelfLink:"", UID:"47c8fbe8-41ef-46a6-81f1-7c5c2c70818f", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"547cf679d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"e4e94d5814199141f8f0009019094875cb5358562b5553da83d3b3fae71e46d1", Pod:"calico-kube-controllers-547cf679d9-vx22q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.90.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00dc0d59b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.752 [INFO][4920] k8s.go 608: Cleaning up netns ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.752 [INFO][4920] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" iface="eth0" netns=""
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.752 [INFO][4920] k8s.go 615: Releasing IP address(es) ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.752 [INFO][4920] utils.go 188: Calico CNI releasing IP address ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9"
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.774 [INFO][4927] ipam_plugin.go 417: Releasing address using handleID ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0"
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.774 [INFO][4927] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.774 [INFO][4927] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.779 [WARNING][4927] ipam_plugin.go 434: Asked to release address but it doesn't exist.
Ignoring ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.779 [INFO][4927] ipam_plugin.go 445: Releasing address using workloadID ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" HandleID="k8s-pod-network.41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--kube--controllers--547cf679d9--vx22q-eth0" Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.781 [INFO][4927] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:25.788029 containerd[1500]: 2024-10-08 20:08:25.785 [INFO][4920] k8s.go 621: Teardown processing complete. ContainerID="41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9" Oct 8 20:08:25.788881 containerd[1500]: time="2024-10-08T20:08:25.788059957Z" level=info msg="TearDown network for sandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\" successfully" Oct 8 20:08:25.794363 containerd[1500]: time="2024-10-08T20:08:25.794331099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:08:25.794435 containerd[1500]: time="2024-10-08T20:08:25.794379573Z" level=info msg="RemovePodSandbox \"41345d2a2fb53738835e84bfb02237d3b2d1cd2c57a9459fda354b4a3f6e3aa9\" returns successfully" Oct 8 20:08:25.794794 containerd[1500]: time="2024-10-08T20:08:25.794767953Z" level=info msg="StopPodSandbox for \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\"" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.826 [WARNING][4945] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"106d4675-f2aa-41c2-b0d2-327ddbc42de2", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2", Pod:"coredns-76f75df574-jcw4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83d246ae6a4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.826 [INFO][4945] k8s.go 608: Cleaning up netns ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.826 [INFO][4945] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" iface="eth0" netns="" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.826 [INFO][4945] k8s.go 615: Releasing IP address(es) ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.826 [INFO][4945] utils.go 188: Calico CNI releasing IP address ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.845 [INFO][4951] ipam_plugin.go 417: Releasing address using handleID ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.845 [INFO][4951] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.845 [INFO][4951] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.849 [WARNING][4951] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.849 [INFO][4951] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.850 [INFO][4951] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:25.854933 containerd[1500]: 2024-10-08 20:08:25.852 [INFO][4945] k8s.go 621: Teardown processing complete. 
ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.855650 containerd[1500]: time="2024-10-08T20:08:25.855397090Z" level=info msg="TearDown network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\" successfully" Oct 8 20:08:25.855650 containerd[1500]: time="2024-10-08T20:08:25.855422639Z" level=info msg="StopPodSandbox for \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\" returns successfully" Oct 8 20:08:25.856206 containerd[1500]: time="2024-10-08T20:08:25.855875052Z" level=info msg="RemovePodSandbox for \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\"" Oct 8 20:08:25.856206 containerd[1500]: time="2024-10-08T20:08:25.855895812Z" level=info msg="Forcibly stopping sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\"" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.889 [WARNING][4970] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"106d4675-f2aa-41c2-b0d2-327ddbc42de2", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"d5709c45f1b80b594087a1883e62e663e694fa62ee13d03fe72dcb4f658398b2", Pod:"coredns-76f75df574-jcw4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83d246ae6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.889 [INFO][4970] k8s.go 608: 
Cleaning up netns ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.889 [INFO][4970] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" iface="eth0" netns="" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.889 [INFO][4970] k8s.go 615: Releasing IP address(es) ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.889 [INFO][4970] utils.go 188: Calico CNI releasing IP address ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.907 [INFO][4976] ipam_plugin.go 417: Releasing address using handleID ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.907 [INFO][4976] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.907 [INFO][4976] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.912 [WARNING][4976] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.912 [INFO][4976] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" HandleID="k8s-pod-network.bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-coredns--76f75df574--jcw4q-eth0" Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.913 [INFO][4976] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:25.917365 containerd[1500]: 2024-10-08 20:08:25.915 [INFO][4970] k8s.go 621: Teardown processing complete. ContainerID="bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679" Oct 8 20:08:25.919407 containerd[1500]: time="2024-10-08T20:08:25.917659863Z" level=info msg="TearDown network for sandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\" successfully" Oct 8 20:08:25.920752 containerd[1500]: time="2024-10-08T20:08:25.920711124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:08:25.920805 containerd[1500]: time="2024-10-08T20:08:25.920760258Z" level=info msg="RemovePodSandbox \"bb752add53158b5a0fab6052e4e7c7abafa2deded948019132ceb76a80613679\" returns successfully" Oct 8 20:08:26.360415 systemd[1]: run-containerd-runc-k8s.io-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac-runc.UTPGrD.mount: Deactivated successfully. 
Oct 8 20:08:28.851881 kubelet[2799]: I1008 20:08:28.851829 2799 topology_manager.go:215] "Topology Admit Handler" podUID="46fffe63-35a0-43e0-a664-684207328d5d" podNamespace="calico-apiserver" podName="calico-apiserver-dd4c968fc-x2rzp" Oct 8 20:08:28.866752 kubelet[2799]: I1008 20:08:28.866253 2799 topology_manager.go:215] "Topology Admit Handler" podUID="0b7ad087-da00-4f73-bb0e-d5cf086813f0" podNamespace="calico-apiserver" podName="calico-apiserver-dd4c968fc-mczmv" Oct 8 20:08:28.872827 systemd[1]: Created slice kubepods-besteffort-pod46fffe63_35a0_43e0_a664_684207328d5d.slice - libcontainer container kubepods-besteffort-pod46fffe63_35a0_43e0_a664_684207328d5d.slice. Oct 8 20:08:28.884580 systemd[1]: Created slice kubepods-besteffort-pod0b7ad087_da00_4f73_bb0e_d5cf086813f0.slice - libcontainer container kubepods-besteffort-pod0b7ad087_da00_4f73_bb0e_d5cf086813f0.slice. Oct 8 20:08:28.914517 kubelet[2799]: I1008 20:08:28.914476 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjf8z\" (UniqueName: \"kubernetes.io/projected/0b7ad087-da00-4f73-bb0e-d5cf086813f0-kube-api-access-cjf8z\") pod \"calico-apiserver-dd4c968fc-mczmv\" (UID: \"0b7ad087-da00-4f73-bb0e-d5cf086813f0\") " pod="calico-apiserver/calico-apiserver-dd4c968fc-mczmv" Oct 8 20:08:28.914708 kubelet[2799]: I1008 20:08:28.914697 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46fffe63-35a0-43e0-a664-684207328d5d-calico-apiserver-certs\") pod \"calico-apiserver-dd4c968fc-x2rzp\" (UID: \"46fffe63-35a0-43e0-a664-684207328d5d\") " pod="calico-apiserver/calico-apiserver-dd4c968fc-x2rzp" Oct 8 20:08:28.915230 kubelet[2799]: I1008 20:08:28.914802 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldv9\" (UniqueName: 
\"kubernetes.io/projected/46fffe63-35a0-43e0-a664-684207328d5d-kube-api-access-lldv9\") pod \"calico-apiserver-dd4c968fc-x2rzp\" (UID: \"46fffe63-35a0-43e0-a664-684207328d5d\") " pod="calico-apiserver/calico-apiserver-dd4c968fc-x2rzp" Oct 8 20:08:28.915230 kubelet[2799]: I1008 20:08:28.914830 2799 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b7ad087-da00-4f73-bb0e-d5cf086813f0-calico-apiserver-certs\") pod \"calico-apiserver-dd4c968fc-mczmv\" (UID: \"0b7ad087-da00-4f73-bb0e-d5cf086813f0\") " pod="calico-apiserver/calico-apiserver-dd4c968fc-mczmv" Oct 8 20:08:29.018422 kubelet[2799]: E1008 20:08:29.018283 2799 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:08:29.018422 kubelet[2799]: E1008 20:08:29.018376 2799 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:08:29.024670 kubelet[2799]: E1008 20:08:29.023865 2799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46fffe63-35a0-43e0-a664-684207328d5d-calico-apiserver-certs podName:46fffe63-35a0-43e0-a664-684207328d5d nodeName:}" failed. No retries permitted until 2024-10-08 20:08:29.521530334 +0000 UTC m=+64.279208764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/46fffe63-35a0-43e0-a664-684207328d5d-calico-apiserver-certs") pod "calico-apiserver-dd4c968fc-x2rzp" (UID: "46fffe63-35a0-43e0-a664-684207328d5d") : secret "calico-apiserver-certs" not found Oct 8 20:08:29.024670 kubelet[2799]: E1008 20:08:29.023894 2799 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b7ad087-da00-4f73-bb0e-d5cf086813f0-calico-apiserver-certs podName:0b7ad087-da00-4f73-bb0e-d5cf086813f0 nodeName:}" failed. 
No retries permitted until 2024-10-08 20:08:29.523882743 +0000 UTC m=+64.281561173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0b7ad087-da00-4f73-bb0e-d5cf086813f0-calico-apiserver-certs") pod "calico-apiserver-dd4c968fc-mczmv" (UID: "0b7ad087-da00-4f73-bb0e-d5cf086813f0") : secret "calico-apiserver-certs" not found Oct 8 20:08:29.780290 containerd[1500]: time="2024-10-08T20:08:29.780134938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4c968fc-x2rzp,Uid:46fffe63-35a0-43e0-a664-684207328d5d,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:08:29.797264 containerd[1500]: time="2024-10-08T20:08:29.796759547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4c968fc-mczmv,Uid:0b7ad087-da00-4f73-bb0e-d5cf086813f0,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:08:29.931031 systemd-networkd[1395]: cali9d2d9079210: Link UP Oct 8 20:08:29.931280 systemd-networkd[1395]: cali9d2d9079210: Gained carrier Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.851 [INFO][5011] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0 calico-apiserver-dd4c968fc- calico-apiserver 46fffe63-35a0-43e0-a664-684207328d5d 827 0 2024-10-08 20:08:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd4c968fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-7-3c1e2fa9c6 calico-apiserver-dd4c968fc-x2rzp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d2d9079210 [] []}} ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" 
Pod="calico-apiserver-dd4c968fc-x2rzp" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.851 [INFO][5011] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-x2rzp" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.892 [INFO][5036] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" HandleID="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.899 [INFO][5036] ipam_plugin.go 270: Auto assigning IP ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" HandleID="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-7-3c1e2fa9c6", "pod":"calico-apiserver-dd4c968fc-x2rzp", "timestamp":"2024-10-08 20:08:29.89249795 +0000 UTC"}, Hostname:"ci-4081-1-0-7-3c1e2fa9c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.899 [INFO][5036] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.899 [INFO][5036] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.899 [INFO][5036] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7-3c1e2fa9c6' Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.900 [INFO][5036] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.904 [INFO][5036] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.907 [INFO][5036] ipam.go 489: Trying affinity for 192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.909 [INFO][5036] ipam.go 155: Attempting to load block cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.912 [INFO][5036] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.912 [INFO][5036] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.914 [INFO][5036] ipam.go 1685: Creating new handle: k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.917 [INFO][5036] ipam.go 1203: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 
2024-10-08 20:08:29.922 [INFO][5036] ipam.go 1216: Successfully claimed IPs: [192.168.90.5/26] block=192.168.90.0/26 handle="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.922 [INFO][5036] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.5/26] handle="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.922 [INFO][5036] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:29.953499 containerd[1500]: 2024-10-08 20:08:29.922 [INFO][5036] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.90.5/26] IPv6=[] ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" HandleID="k8s-pod-network.a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" Oct 8 20:08:29.955903 containerd[1500]: 2024-10-08 20:08:29.926 [INFO][5011] k8s.go 386: Populated endpoint ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-x2rzp" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0", GenerateName:"calico-apiserver-dd4c968fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"46fffe63-35a0-43e0-a664-684207328d5d", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4c968fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"", Pod:"calico-apiserver-dd4c968fc-x2rzp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d2d9079210", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:29.955903 containerd[1500]: 2024-10-08 20:08:29.926 [INFO][5011] k8s.go 387: Calico CNI using IPs: [192.168.90.5/32] ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-x2rzp" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" Oct 8 20:08:29.955903 containerd[1500]: 2024-10-08 20:08:29.926 [INFO][5011] dataplane_linux.go 68: Setting the host side veth name to cali9d2d9079210 ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-x2rzp" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" Oct 8 20:08:29.955903 containerd[1500]: 2024-10-08 20:08:29.931 [INFO][5011] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-x2rzp" 
WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" Oct 8 20:08:29.955903 containerd[1500]: 2024-10-08 20:08:29.933 [INFO][5011] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-x2rzp" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0", GenerateName:"calico-apiserver-dd4c968fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"46fffe63-35a0-43e0-a664-684207328d5d", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4c968fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde", Pod:"calico-apiserver-dd4c968fc-x2rzp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d2d9079210", MAC:"36:98:60:0d:f6:d4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:29.955903 containerd[1500]: 2024-10-08 20:08:29.945 [INFO][5011] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-x2rzp" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--x2rzp-eth0" Oct 8 20:08:30.001412 systemd-networkd[1395]: calia480cb1cd28: Link UP Oct 8 20:08:30.002414 systemd-networkd[1395]: calia480cb1cd28: Gained carrier Oct 8 20:08:30.014300 containerd[1500]: time="2024-10-08T20:08:30.012914538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:08:30.014300 containerd[1500]: time="2024-10-08T20:08:30.012973571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:08:30.014300 containerd[1500]: time="2024-10-08T20:08:30.012985965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:30.014300 containerd[1500]: time="2024-10-08T20:08:30.013057571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.887 [INFO][5023] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0 calico-apiserver-dd4c968fc- calico-apiserver 0b7ad087-da00-4f73-bb0e-d5cf086813f0 829 0 2024-10-08 20:08:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd4c968fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-7-3c1e2fa9c6 calico-apiserver-dd4c968fc-mczmv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia480cb1cd28 [] []}} ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.889 [INFO][5023] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.939 [INFO][5044] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" HandleID="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.960 [INFO][5044] ipam_plugin.go 270: Auto assigning IP 
ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" HandleID="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-7-3c1e2fa9c6", "pod":"calico-apiserver-dd4c968fc-mczmv", "timestamp":"2024-10-08 20:08:29.939750675 +0000 UTC"}, Hostname:"ci-4081-1-0-7-3c1e2fa9c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.960 [INFO][5044] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.960 [INFO][5044] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.960 [INFO][5044] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7-3c1e2fa9c6' Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.963 [INFO][5044] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.968 [INFO][5044] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.974 [INFO][5044] ipam.go 489: Trying affinity for 192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.977 [INFO][5044] ipam.go 155: Attempting to load block cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.979 [INFO][5044] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.0/26 host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.979 [INFO][5044] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.0/26 handle="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.981 [INFO][5044] ipam.go 1685: Creating new handle: k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.985 [INFO][5044] ipam.go 1203: Writing block in order to claim IPs block=192.168.90.0/26 handle="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.991 [INFO][5044] ipam.go 1216: Successfully claimed IPs: [192.168.90.6/26] block=192.168.90.0/26 
handle="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.991 [INFO][5044] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.6/26] handle="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" host="ci-4081-1-0-7-3c1e2fa9c6" Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.991 [INFO][5044] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:08:30.023371 containerd[1500]: 2024-10-08 20:08:29.991 [INFO][5044] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.90.6/26] IPv6=[] ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" HandleID="k8s-pod-network.74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Workload="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" Oct 8 20:08:30.025722 containerd[1500]: 2024-10-08 20:08:29.997 [INFO][5023] k8s.go 386: Populated endpoint ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0", GenerateName:"calico-apiserver-dd4c968fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b7ad087-da00-4f73-bb0e-d5cf086813f0", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4c968fc", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"", Pod:"calico-apiserver-dd4c968fc-mczmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia480cb1cd28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:30.025722 containerd[1500]: 2024-10-08 20:08:29.997 [INFO][5023] k8s.go 387: Calico CNI using IPs: [192.168.90.6/32] ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" Oct 8 20:08:30.025722 containerd[1500]: 2024-10-08 20:08:29.997 [INFO][5023] dataplane_linux.go 68: Setting the host side veth name to calia480cb1cd28 ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" Oct 8 20:08:30.025722 containerd[1500]: 2024-10-08 20:08:30.003 [INFO][5023] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" Oct 8 20:08:30.025722 containerd[1500]: 2024-10-08 20:08:30.003 
[INFO][5023] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0", GenerateName:"calico-apiserver-dd4c968fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b7ad087-da00-4f73-bb0e-d5cf086813f0", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd4c968fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7-3c1e2fa9c6", ContainerID:"74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba", Pod:"calico-apiserver-dd4c968fc-mczmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.90.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia480cb1cd28", MAC:"6e:dd:13:93:45:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:08:30.025722 containerd[1500]: 2024-10-08 20:08:30.015 [INFO][5023] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba" Namespace="calico-apiserver" Pod="calico-apiserver-dd4c968fc-mczmv" WorkloadEndpoint="ci--4081--1--0--7--3c1e2fa9c6-k8s-calico--apiserver--dd4c968fc--mczmv-eth0" Oct 8 20:08:30.061375 systemd[1]: Started cri-containerd-a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde.scope - libcontainer container a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde. Oct 8 20:08:30.065045 containerd[1500]: time="2024-10-08T20:08:30.064958760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:08:30.066574 containerd[1500]: time="2024-10-08T20:08:30.065073488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:08:30.066574 containerd[1500]: time="2024-10-08T20:08:30.065334847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:30.066574 containerd[1500]: time="2024-10-08T20:08:30.065425140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:08:30.097387 systemd[1]: Started cri-containerd-74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba.scope - libcontainer container 74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba. 
Oct 8 20:08:30.143075 containerd[1500]: time="2024-10-08T20:08:30.142975256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4c968fc-mczmv,Uid:0b7ad087-da00-4f73-bb0e-d5cf086813f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba\"" Oct 8 20:08:30.145774 containerd[1500]: time="2024-10-08T20:08:30.145749079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:08:30.149915 containerd[1500]: time="2024-10-08T20:08:30.149884910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd4c968fc-x2rzp,Uid:46fffe63-35a0-43e0-a664-684207328d5d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde\"" Oct 8 20:08:31.274459 systemd-networkd[1395]: cali9d2d9079210: Gained IPv6LL Oct 8 20:08:31.659036 systemd-networkd[1395]: calia480cb1cd28: Gained IPv6LL Oct 8 20:08:32.030583 containerd[1500]: time="2024-10-08T20:08:32.029952779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:08:32.031136 containerd[1500]: time="2024-10-08T20:08:32.031042238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 8 20:08:32.031478 containerd[1500]: time="2024-10-08T20:08:32.031449916Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:08:32.033087 containerd[1500]: time="2024-10-08T20:08:32.033042645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:08:32.034827 containerd[1500]: 
time="2024-10-08T20:08:32.034658358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 1.888881175s" Oct 8 20:08:32.034827 containerd[1500]: time="2024-10-08T20:08:32.034684078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:08:32.035610 containerd[1500]: time="2024-10-08T20:08:32.035320983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:08:32.036434 containerd[1500]: time="2024-10-08T20:08:32.036406704Z" level=info msg="CreateContainer within sandbox \"74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:08:32.052404 containerd[1500]: time="2024-10-08T20:08:32.052374453Z" level=info msg="CreateContainer within sandbox \"74d989f8e46b4cfec98622c16157c759aa818b86b2245158324c491453351aba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"26ae1bbea12758998f8e45330458e4593067d124613604918065146b0691730d\"" Oct 8 20:08:32.052936 containerd[1500]: time="2024-10-08T20:08:32.052917310Z" level=info msg="StartContainer for \"26ae1bbea12758998f8e45330458e4593067d124613604918065146b0691730d\"" Oct 8 20:08:32.093342 systemd[1]: Started cri-containerd-26ae1bbea12758998f8e45330458e4593067d124613604918065146b0691730d.scope - libcontainer container 26ae1bbea12758998f8e45330458e4593067d124613604918065146b0691730d. 
Oct 8 20:08:32.130719 containerd[1500]: time="2024-10-08T20:08:32.130681466Z" level=info msg="StartContainer for \"26ae1bbea12758998f8e45330458e4593067d124613604918065146b0691730d\" returns successfully" Oct 8 20:08:32.428829 containerd[1500]: time="2024-10-08T20:08:32.428416282Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:08:32.429827 containerd[1500]: time="2024-10-08T20:08:32.429395099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 20:08:32.432392 containerd[1500]: time="2024-10-08T20:08:32.432357522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 397.005389ms" Oct 8 20:08:32.432392 containerd[1500]: time="2024-10-08T20:08:32.432389353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:08:32.433894 containerd[1500]: time="2024-10-08T20:08:32.433870038Z" level=info msg="CreateContainer within sandbox \"a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:08:32.455147 containerd[1500]: time="2024-10-08T20:08:32.455038821Z" level=info msg="CreateContainer within sandbox \"a7504046d6539ed7349636f93a83dc6bc36215d1c2840857d4612e30d0dc9bde\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09f0d471f2ced8ea5212776e26ae23cedbbb89fa909d86b6824fef2fa9b32147\"" Oct 8 20:08:32.455690 containerd[1500]: 
time="2024-10-08T20:08:32.455666840Z" level=info msg="StartContainer for \"09f0d471f2ced8ea5212776e26ae23cedbbb89fa909d86b6824fef2fa9b32147\"" Oct 8 20:08:32.483057 systemd[1]: Started cri-containerd-09f0d471f2ced8ea5212776e26ae23cedbbb89fa909d86b6824fef2fa9b32147.scope - libcontainer container 09f0d471f2ced8ea5212776e26ae23cedbbb89fa909d86b6824fef2fa9b32147. Oct 8 20:08:32.535523 containerd[1500]: time="2024-10-08T20:08:32.535492210Z" level=info msg="StartContainer for \"09f0d471f2ced8ea5212776e26ae23cedbbb89fa909d86b6824fef2fa9b32147\" returns successfully" Oct 8 20:08:32.729720 kubelet[2799]: I1008 20:08:32.729696 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd4c968fc-x2rzp" podStartSLOduration=2.448199652 podStartE2EDuration="4.729663309s" podCreationTimestamp="2024-10-08 20:08:28 +0000 UTC" firstStartedPulling="2024-10-08 20:08:30.151090811 +0000 UTC m=+64.908769241" lastFinishedPulling="2024-10-08 20:08:32.432554469 +0000 UTC m=+67.190232898" observedRunningTime="2024-10-08 20:08:32.72889316 +0000 UTC m=+67.486571590" watchObservedRunningTime="2024-10-08 20:08:32.729663309 +0000 UTC m=+67.487341739" Oct 8 20:08:32.730804 kubelet[2799]: I1008 20:08:32.730397 2799 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd4c968fc-mczmv" podStartSLOduration=2.840672182 podStartE2EDuration="4.730370178s" podCreationTimestamp="2024-10-08 20:08:28 +0000 UTC" firstStartedPulling="2024-10-08 20:08:30.14539871 +0000 UTC m=+64.903077140" lastFinishedPulling="2024-10-08 20:08:32.035096705 +0000 UTC m=+66.792775136" observedRunningTime="2024-10-08 20:08:32.718857079 +0000 UTC m=+67.476535509" watchObservedRunningTime="2024-10-08 20:08:32.730370178 +0000 UTC m=+67.488048618" Oct 8 20:08:53.323572 systemd[1]: run-containerd-runc-k8s.io-f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d-runc.7uNQfw.mount: Deactivated successfully. 
Oct 8 20:09:23.359971 systemd[1]: run-containerd-runc-k8s.io-f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d-runc.xOXpzJ.mount: Deactivated successfully. Oct 8 20:10:23.327272 systemd[1]: run-containerd-runc-k8s.io-f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d-runc.VqF1Ou.mount: Deactivated successfully. Oct 8 20:10:58.030299 systemd[1]: run-containerd-runc-k8s.io-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac-runc.UDirYo.mount: Deactivated successfully. Oct 8 20:11:53.332024 systemd[1]: run-containerd-runc-k8s.io-f4a2b2b4e80ad9b425620873f6041d121855cd30508bfe54e615fcca39e0e79d-runc.PN6pgF.mount: Deactivated successfully. Oct 8 20:11:58.010966 systemd[1]: run-containerd-runc-k8s.io-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac-runc.zpE86c.mount: Deactivated successfully. Oct 8 20:12:17.471465 systemd[1]: Started sshd@7-49.13.138.82:22-147.75.109.163:44080.service - OpenSSH per-connection server daemon (147.75.109.163:44080). Oct 8 20:12:18.492504 sshd[5819]: Accepted publickey for core from 147.75.109.163 port 44080 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:18.495212 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:18.502931 systemd-logind[1473]: New session 8 of user core. Oct 8 20:12:18.510412 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:12:19.631724 sshd[5819]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:19.635982 systemd[1]: sshd@7-49.13.138.82:22-147.75.109.163:44080.service: Deactivated successfully. Oct 8 20:12:19.637926 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:12:19.640006 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:12:19.642253 systemd-logind[1473]: Removed session 8. 
Oct 8 20:12:24.805552 systemd[1]: Started sshd@8-49.13.138.82:22-147.75.109.163:44092.service - OpenSSH per-connection server daemon (147.75.109.163:44092). Oct 8 20:12:25.795799 sshd[5864]: Accepted publickey for core from 147.75.109.163 port 44092 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:25.797645 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:25.803693 systemd-logind[1473]: New session 9 of user core. Oct 8 20:12:25.808388 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:12:26.362137 systemd[1]: run-containerd-runc-k8s.io-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac-runc.p2IMMJ.mount: Deactivated successfully. Oct 8 20:12:26.634382 sshd[5864]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:26.638088 systemd[1]: sshd@8-49.13.138.82:22-147.75.109.163:44092.service: Deactivated successfully. Oct 8 20:12:26.639988 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:12:26.641734 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:12:26.643187 systemd-logind[1473]: Removed session 9. Oct 8 20:12:31.813517 systemd[1]: Started sshd@9-49.13.138.82:22-147.75.109.163:59080.service - OpenSSH per-connection server daemon (147.75.109.163:59080). Oct 8 20:12:32.789749 sshd[5899]: Accepted publickey for core from 147.75.109.163 port 59080 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:32.792275 sshd[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:32.799035 systemd-logind[1473]: New session 10 of user core. Oct 8 20:12:32.804559 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:12:33.564894 sshd[5899]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:33.568005 systemd[1]: sshd@9-49.13.138.82:22-147.75.109.163:59080.service: Deactivated successfully. 
Oct 8 20:12:33.570326 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:12:33.572723 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:12:33.574106 systemd-logind[1473]: Removed session 10. Oct 8 20:12:33.748025 systemd[1]: Started sshd@10-49.13.138.82:22-147.75.109.163:59090.service - OpenSSH per-connection server daemon (147.75.109.163:59090). Oct 8 20:12:34.741741 sshd[5919]: Accepted publickey for core from 147.75.109.163 port 59090 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:34.743470 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:34.748597 systemd-logind[1473]: New session 11 of user core. Oct 8 20:12:34.755386 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:12:35.555725 sshd[5919]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:35.560272 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:12:35.561041 systemd[1]: sshd@10-49.13.138.82:22-147.75.109.163:59090.service: Deactivated successfully. Oct 8 20:12:35.563157 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:12:35.564472 systemd-logind[1473]: Removed session 11. Oct 8 20:12:35.733539 systemd[1]: Started sshd@11-49.13.138.82:22-147.75.109.163:59096.service - OpenSSH per-connection server daemon (147.75.109.163:59096). Oct 8 20:12:36.719002 sshd[5935]: Accepted publickey for core from 147.75.109.163 port 59096 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:36.722044 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:36.728460 systemd-logind[1473]: New session 12 of user core. Oct 8 20:12:36.734406 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:12:37.497379 sshd[5935]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:37.501317 systemd-logind[1473]: Session 12 logged out. 
Waiting for processes to exit. Oct 8 20:12:37.502157 systemd[1]: sshd@11-49.13.138.82:22-147.75.109.163:59096.service: Deactivated successfully. Oct 8 20:12:37.504617 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:12:37.506741 systemd-logind[1473]: Removed session 12. Oct 8 20:12:42.667463 systemd[1]: Started sshd@12-49.13.138.82:22-147.75.109.163:44026.service - OpenSSH per-connection server daemon (147.75.109.163:44026). Oct 8 20:12:43.636253 sshd[5962]: Accepted publickey for core from 147.75.109.163 port 44026 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:43.637998 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:43.643265 systemd-logind[1473]: New session 13 of user core. Oct 8 20:12:43.649564 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:12:44.386357 sshd[5962]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:44.389479 systemd[1]: sshd@12-49.13.138.82:22-147.75.109.163:44026.service: Deactivated successfully. Oct 8 20:12:44.391840 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:12:44.392767 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:12:44.394424 systemd-logind[1473]: Removed session 13. Oct 8 20:12:44.557545 systemd[1]: Started sshd@13-49.13.138.82:22-147.75.109.163:44028.service - OpenSSH per-connection server daemon (147.75.109.163:44028). Oct 8 20:12:45.538589 sshd[5980]: Accepted publickey for core from 147.75.109.163 port 44028 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:45.542512 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:45.553880 systemd-logind[1473]: New session 14 of user core. Oct 8 20:12:45.560591 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 8 20:12:46.465048 sshd[5980]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:46.474321 systemd[1]: sshd@13-49.13.138.82:22-147.75.109.163:44028.service: Deactivated successfully. Oct 8 20:12:46.478518 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:12:46.481050 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit. Oct 8 20:12:46.483027 systemd-logind[1473]: Removed session 14. Oct 8 20:12:46.631621 systemd[1]: Started sshd@14-49.13.138.82:22-147.75.109.163:44042.service - OpenSSH per-connection server daemon (147.75.109.163:44042). Oct 8 20:12:47.634638 sshd[5991]: Accepted publickey for core from 147.75.109.163 port 44042 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:47.638725 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:47.647750 systemd-logind[1473]: New session 15 of user core. Oct 8 20:12:47.654502 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:12:50.069931 sshd[5991]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:50.083351 systemd[1]: sshd@14-49.13.138.82:22-147.75.109.163:44042.service: Deactivated successfully. Oct 8 20:12:50.088623 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:12:50.090790 systemd-logind[1473]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:12:50.094157 systemd-logind[1473]: Removed session 15. Oct 8 20:12:50.237575 systemd[1]: Started sshd@15-49.13.138.82:22-147.75.109.163:41164.service - OpenSSH per-connection server daemon (147.75.109.163:41164). Oct 8 20:12:51.239675 sshd[6009]: Accepted publickey for core from 147.75.109.163 port 41164 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:51.242331 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:51.247669 systemd-logind[1473]: New session 16 of user core. 
Oct 8 20:12:51.252377 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:12:52.361702 sshd[6009]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:52.367497 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:12:52.368534 systemd[1]: sshd@15-49.13.138.82:22-147.75.109.163:41164.service: Deactivated successfully. Oct 8 20:12:52.371021 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:12:52.372447 systemd-logind[1473]: Removed session 16. Oct 8 20:12:52.531298 systemd[1]: Started sshd@16-49.13.138.82:22-147.75.109.163:41180.service - OpenSSH per-connection server daemon (147.75.109.163:41180). Oct 8 20:12:53.535146 sshd[6020]: Accepted publickey for core from 147.75.109.163 port 41180 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:12:53.538185 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:12:53.544544 systemd-logind[1473]: New session 17 of user core. Oct 8 20:12:53.550517 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:12:54.299693 sshd[6020]: pam_unix(sshd:session): session closed for user core Oct 8 20:12:54.303778 systemd[1]: sshd@16-49.13.138.82:22-147.75.109.163:41180.service: Deactivated successfully. Oct 8 20:12:54.306108 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:12:54.306917 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:12:54.308388 systemd-logind[1473]: Removed session 17. Oct 8 20:12:56.342955 systemd[1]: run-containerd-runc-k8s.io-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac-runc.BYHh43.mount: Deactivated successfully. Oct 8 20:12:59.481825 systemd[1]: Started sshd@17-49.13.138.82:22-147.75.109.163:48236.service - OpenSSH per-connection server daemon (147.75.109.163:48236). 
Oct 8 20:13:00.506367 sshd[6102]: Accepted publickey for core from 147.75.109.163 port 48236 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:13:00.510592 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:13:00.518006 systemd-logind[1473]: New session 18 of user core. Oct 8 20:13:00.527470 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:13:01.284502 sshd[6102]: pam_unix(sshd:session): session closed for user core Oct 8 20:13:01.290662 systemd[1]: sshd@17-49.13.138.82:22-147.75.109.163:48236.service: Deactivated successfully. Oct 8 20:13:01.293993 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:13:01.295700 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:13:01.297794 systemd-logind[1473]: Removed session 18. Oct 8 20:13:06.469806 systemd[1]: Started sshd@18-49.13.138.82:22-147.75.109.163:48248.service - OpenSSH per-connection server daemon (147.75.109.163:48248). Oct 8 20:13:07.453611 sshd[6121]: Accepted publickey for core from 147.75.109.163 port 48248 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:13:07.457632 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:13:07.465084 systemd-logind[1473]: New session 19 of user core. Oct 8 20:13:07.472479 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:13:08.181789 sshd[6121]: pam_unix(sshd:session): session closed for user core Oct 8 20:13:08.189000 systemd[1]: sshd@18-49.13.138.82:22-147.75.109.163:48248.service: Deactivated successfully. Oct 8 20:13:08.193214 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:13:08.194788 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:13:08.197549 systemd-logind[1473]: Removed session 19. 
Oct 8 20:13:13.347774 systemd[1]: Started sshd@19-49.13.138.82:22-147.75.109.163:55668.service - OpenSSH per-connection server daemon (147.75.109.163:55668).
Oct 8 20:13:14.317342 sshd[6136]: Accepted publickey for core from 147.75.109.163 port 55668 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I
Oct 8 20:13:14.319914 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:13:14.324818 systemd-logind[1473]: New session 20 of user core.
Oct 8 20:13:14.329355 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 20:13:15.167293 sshd[6136]: pam_unix(sshd:session): session closed for user core
Oct 8 20:13:15.176492 systemd[1]: sshd@19-49.13.138.82:22-147.75.109.163:55668.service: Deactivated successfully.
Oct 8 20:13:15.180984 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 20:13:15.183815 systemd-logind[1473]: Session 20 logged out. Waiting for processes to exit.
Oct 8 20:13:15.185797 systemd-logind[1473]: Removed session 20.
Oct 8 20:13:26.343618 systemd[1]: run-containerd-runc-k8s.io-93f9a8357e75b727a60e6031692e87410b43d91ef5471d1f3ca25e4bfbc93dac-runc.10WxAT.mount: Deactivated successfully.
Oct 8 20:13:30.472441 systemd[1]: cri-containerd-5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07.scope: Deactivated successfully.
Oct 8 20:13:30.472763 systemd[1]: cri-containerd-5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07.scope: Consumed 5.483s CPU time.
Oct 8 20:13:30.525264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07-rootfs.mount: Deactivated successfully.
Oct 8 20:13:30.545264 containerd[1500]: time="2024-10-08T20:13:30.524771515Z" level=info msg="shim disconnected" id=5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07 namespace=k8s.io
Oct 8 20:13:30.552471 containerd[1500]: time="2024-10-08T20:13:30.552413603Z" level=warning msg="cleaning up after shim disconnected" id=5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07 namespace=k8s.io
Oct 8 20:13:30.552471 containerd[1500]: time="2024-10-08T20:13:30.552452306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:13:30.932181 kubelet[2799]: E1008 20:13:30.931957 2799 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43368->10.0.0.2:2379: read: connection timed out"
Oct 8 20:13:31.272728 systemd[1]: cri-containerd-44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b.scope: Deactivated successfully.
Oct 8 20:13:31.273236 systemd[1]: cri-containerd-44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b.scope: Consumed 5.420s CPU time, 25.9M memory peak, 0B memory swap peak.
Oct 8 20:13:31.294049 containerd[1500]: time="2024-10-08T20:13:31.293827743Z" level=info msg="shim disconnected" id=44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b namespace=k8s.io
Oct 8 20:13:31.294049 containerd[1500]: time="2024-10-08T20:13:31.293876463Z" level=warning msg="cleaning up after shim disconnected" id=44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b namespace=k8s.io
Oct 8 20:13:31.294049 containerd[1500]: time="2024-10-08T20:13:31.293886511Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:13:31.294533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b-rootfs.mount: Deactivated successfully.
Oct 8 20:13:31.472677 kubelet[2799]: I1008 20:13:31.472629 2799 scope.go:117] "RemoveContainer" containerID="5b0ea205d8815bac78b398acd897351eef56c12a8d649d8a7ff21b5dc99b3a07"
Oct 8 20:13:31.472930 kubelet[2799]: I1008 20:13:31.472917 2799 scope.go:117] "RemoveContainer" containerID="44d0ecdccc299124e504fdca886d5ed6b69ed3431170fe49381aa28661d1a14b"
Oct 8 20:13:31.495961 containerd[1500]: time="2024-10-08T20:13:31.495863186Z" level=info msg="CreateContainer within sandbox \"60c2319ccd3025710c401fdde25bf7c6e63986310840a65f0a7d690062f87402\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 8 20:13:31.505209 containerd[1500]: time="2024-10-08T20:13:31.505127785Z" level=info msg="CreateContainer within sandbox \"9e2d4894de139fae94b0def3e1b9b028a360e6f34f16025e9c09ed34f13dc299\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 8 20:13:31.536850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526495622.mount: Deactivated successfully.
Oct 8 20:13:31.541657 containerd[1500]: time="2024-10-08T20:13:31.541525797Z" level=info msg="CreateContainer within sandbox \"9e2d4894de139fae94b0def3e1b9b028a360e6f34f16025e9c09ed34f13dc299\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"666c8da047f097aee183a7fa0c1313c27191c08d87becfa30cede0e10e2a0831\""
Oct 8 20:13:31.542648 containerd[1500]: time="2024-10-08T20:13:31.542605818Z" level=info msg="StartContainer for \"666c8da047f097aee183a7fa0c1313c27191c08d87becfa30cede0e10e2a0831\""
Oct 8 20:13:31.546295 containerd[1500]: time="2024-10-08T20:13:31.545791076Z" level=info msg="CreateContainer within sandbox \"60c2319ccd3025710c401fdde25bf7c6e63986310840a65f0a7d690062f87402\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6ec10a61a0477f92308ecfbe21c48579ff5e266bb00976c23e3802a550cf3a8e\""
Oct 8 20:13:31.559031 containerd[1500]: time="2024-10-08T20:13:31.558164320Z" level=info msg="StartContainer for \"6ec10a61a0477f92308ecfbe21c48579ff5e266bb00976c23e3802a550cf3a8e\""
Oct 8 20:13:31.602576 systemd[1]: Started cri-containerd-666c8da047f097aee183a7fa0c1313c27191c08d87becfa30cede0e10e2a0831.scope - libcontainer container 666c8da047f097aee183a7fa0c1313c27191c08d87becfa30cede0e10e2a0831.
Oct 8 20:13:31.604850 systemd[1]: Started cri-containerd-6ec10a61a0477f92308ecfbe21c48579ff5e266bb00976c23e3802a550cf3a8e.scope - libcontainer container 6ec10a61a0477f92308ecfbe21c48579ff5e266bb00976c23e3802a550cf3a8e.
Oct 8 20:13:31.647991 containerd[1500]: time="2024-10-08T20:13:31.647959910Z" level=info msg="StartContainer for \"666c8da047f097aee183a7fa0c1313c27191c08d87becfa30cede0e10e2a0831\" returns successfully"
Oct 8 20:13:31.665974 containerd[1500]: time="2024-10-08T20:13:31.665934772Z" level=info msg="StartContainer for \"6ec10a61a0477f92308ecfbe21c48579ff5e266bb00976c23e3802a550cf3a8e\" returns successfully"
Oct 8 20:13:35.160074 kubelet[2799]: E1008 20:13:35.160010 2799 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43178->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6.17fc9361a7ffd988 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-1-0-7-3c1e2fa9c6,UID:e9cd5c7e1f5d9eb6ae422e8aa8a3d297,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7-3c1e2fa9c6,},FirstTimestamp:2024-10-08 20:13:24.643101064 +0000 UTC m=+359.400779534,LastTimestamp:2024-10-08 20:13:24.643101064 +0000 UTC m=+359.400779534,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7-3c1e2fa9c6,}"
Oct 8 20:13:36.502471 systemd[1]: cri-containerd-b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9.scope: Deactivated successfully.
Oct 8 20:13:36.503508 systemd[1]: cri-containerd-b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9.scope: Consumed 1.910s CPU time, 16.5M memory peak, 0B memory swap peak.
Oct 8 20:13:36.526291 containerd[1500]: time="2024-10-08T20:13:36.525979108Z" level=info msg="shim disconnected" id=b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9 namespace=k8s.io
Oct 8 20:13:36.526291 containerd[1500]: time="2024-10-08T20:13:36.526041074Z" level=warning msg="cleaning up after shim disconnected" id=b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9 namespace=k8s.io
Oct 8 20:13:36.526291 containerd[1500]: time="2024-10-08T20:13:36.526052716Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:13:36.529075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5ec704b561259fd6d82dc6d3a67425358a953fc6aeead4b0cf81d3977fcd0f9-rootfs.mount: Deactivated successfully.