Nov 12 21:32:17.950893 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 21:32:17.950919 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 21:32:17.950928 kernel: BIOS-provided physical RAM map:
Nov 12 21:32:17.950934 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 21:32:17.950940 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 21:32:17.950946 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 21:32:17.950953 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Nov 12 21:32:17.950960 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Nov 12 21:32:17.950968 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 21:32:17.950974 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 12 21:32:17.950980 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 21:32:17.950986 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 21:32:17.950993 kernel: NX (Execute Disable) protection: active
Nov 12 21:32:17.950999 kernel: APIC: Static calls initialized
Nov 12 21:32:17.951009 kernel: SMBIOS 2.8 present.
Nov 12 21:32:17.951016 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Nov 12 21:32:17.951022 kernel: Hypervisor detected: KVM
Nov 12 21:32:17.951029 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 21:32:17.951035 kernel: kvm-clock: using sched offset of 3147749999 cycles
Nov 12 21:32:17.951043 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 21:32:17.951050 kernel: tsc: Detected 2495.310 MHz processor
Nov 12 21:32:17.951057 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 21:32:17.951064 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 21:32:17.955259 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 12 21:32:17.955277 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 21:32:17.955285 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 21:32:17.955293 kernel: Using GB pages for direct mapping
Nov 12 21:32:17.955301 kernel: ACPI: Early table checksum verification disabled
Nov 12 21:32:17.955308 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Nov 12 21:32:17.955316 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 21:32:17.955324 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 21:32:17.955331 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 21:32:17.955341 kernel: ACPI: FACS 0x000000007CFE0000 000040
Nov 12 21:32:17.955349 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 21:32:17.955356 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 21:32:17.955363 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 21:32:17.955370 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 21:32:17.955377 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Nov 12 21:32:17.955384 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Nov 12 21:32:17.955391 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Nov 12 21:32:17.955404 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Nov 12 21:32:17.955411 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Nov 12 21:32:17.955418 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Nov 12 21:32:17.955426 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Nov 12 21:32:17.955433 kernel: No NUMA configuration found
Nov 12 21:32:17.955440 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Nov 12 21:32:17.955450 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Nov 12 21:32:17.955458 kernel: Zone ranges:
Nov 12 21:32:17.955465 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 21:32:17.955473 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Nov 12 21:32:17.955480 kernel: Normal empty
Nov 12 21:32:17.955487 kernel: Movable zone start for each node
Nov 12 21:32:17.955494 kernel: Early memory node ranges
Nov 12 21:32:17.955502 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 21:32:17.955509 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Nov 12 21:32:17.955516 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Nov 12 21:32:17.955525 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 21:32:17.955533 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 21:32:17.955540 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 12 21:32:17.955547 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 21:32:17.955554 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 21:32:17.955562 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 21:32:17.955569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 21:32:17.955576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 21:32:17.955584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 21:32:17.955593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 21:32:17.955601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 21:32:17.955608 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 21:32:17.955615 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 21:32:17.955622 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 21:32:17.955629 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 21:32:17.955637 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 12 21:32:17.955644 kernel: Booting paravirtualized kernel on KVM
Nov 12 21:32:17.955652 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 21:32:17.955662 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 21:32:17.955669 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 21:32:17.955676 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 21:32:17.955684 kernel: pcpu-alloc: [0] 0 1
Nov 12 21:32:17.955691 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 12 21:32:17.955699 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 21:32:17.955707 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 21:32:17.955714 kernel: random: crng init done
Nov 12 21:32:17.955727 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 21:32:17.955738 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 21:32:17.955749 kernel: Fallback order for Node 0: 0
Nov 12 21:32:17.955760 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Nov 12 21:32:17.955771 kernel: Policy zone: DMA32
Nov 12 21:32:17.955782 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 21:32:17.955792 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Nov 12 21:32:17.955803 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 21:32:17.955814 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 21:32:17.955828 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 21:32:17.955839 kernel: Dynamic Preempt: voluntary
Nov 12 21:32:17.955849 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 21:32:17.955866 kernel: rcu: RCU event tracing is enabled.
Nov 12 21:32:17.955876 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 21:32:17.955886 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 21:32:17.955895 kernel: Rude variant of Tasks RCU enabled.
Nov 12 21:32:17.955905 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 21:32:17.955914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 21:32:17.955928 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 21:32:17.955939 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 12 21:32:17.955950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 21:32:17.955960 kernel: Console: colour VGA+ 80x25
Nov 12 21:32:17.955970 kernel: printk: console [tty0] enabled
Nov 12 21:32:17.955978 kernel: printk: console [ttyS0] enabled
Nov 12 21:32:17.955988 kernel: ACPI: Core revision 20230628
Nov 12 21:32:17.955997 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 21:32:17.956007 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 21:32:17.956016 kernel: x2apic enabled
Nov 12 21:32:17.956058 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 21:32:17.956089 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 21:32:17.956099 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 21:32:17.956127 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Nov 12 21:32:17.956137 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 21:32:17.956147 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 21:32:17.956157 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 21:32:17.956167 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 21:32:17.956196 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 21:32:17.956206 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 21:32:17.956216 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 21:32:17.956229 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 21:32:17.956239 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 21:32:17.956250 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 21:32:17.956260 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 21:32:17.956270 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 21:32:17.956281 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 21:32:17.956291 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 21:32:17.956302 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 21:32:17.956315 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 21:32:17.956325 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 21:32:17.956335 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 21:32:17.956345 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 21:32:17.956355 kernel: Freeing SMP alternatives memory: 32K
Nov 12 21:32:17.956367 kernel: pid_max: default: 32768 minimum: 301
Nov 12 21:32:17.956378 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 21:32:17.956388 kernel: landlock: Up and running.
Nov 12 21:32:17.956397 kernel: SELinux: Initializing.
Nov 12 21:32:17.956408 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 21:32:17.956418 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 21:32:17.956428 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 21:32:17.956438 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 21:32:17.956448 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 21:32:17.956461 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 21:32:17.956471 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 21:32:17.956481 kernel: ... version:                0
Nov 12 21:32:17.956492 kernel: ... bit width:              48
Nov 12 21:32:17.956502 kernel: ... generic registers:      6
Nov 12 21:32:17.956512 kernel: ... value mask:             0000ffffffffffff
Nov 12 21:32:17.956556 kernel: ... max period:             00007fffffffffff
Nov 12 21:32:17.956567 kernel: ... fixed-purpose events:   0
Nov 12 21:32:17.956578 kernel: ... event mask:             000000000000003f
Nov 12 21:32:17.956592 kernel: signal: max sigframe size: 1776
Nov 12 21:32:17.956603 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 21:32:17.956614 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 21:32:17.956624 kernel: smp: Bringing up secondary CPUs ...
Nov 12 21:32:17.956634 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 21:32:17.956643 kernel: .... node #0, CPUs: #1
Nov 12 21:32:17.956653 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 21:32:17.956686 kernel: smpboot: Max logical packages: 1
Nov 12 21:32:17.956697 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Nov 12 21:32:17.956712 kernel: devtmpfs: initialized
Nov 12 21:32:17.956723 kernel: x86/mm: Memory block size: 128MB
Nov 12 21:32:17.956734 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 21:32:17.956744 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 21:32:17.956755 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 21:32:17.956765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 21:32:17.956775 kernel: audit: initializing netlink subsys (disabled)
Nov 12 21:32:17.956786 kernel: audit: type=2000 audit(1731447136.575:1): state=initialized audit_enabled=0 res=1
Nov 12 21:32:17.956796 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 21:32:17.956810 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 21:32:17.956820 kernel: cpuidle: using governor menu
Nov 12 21:32:17.956830 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 21:32:17.956839 kernel: dca service started, version 1.12.1
Nov 12 21:32:17.956850 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 21:32:17.956860 kernel: PCI: Using configuration type 1 for base access
Nov 12 21:32:17.956870 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 21:32:17.956881 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 21:32:17.956891 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 21:32:17.956905 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 21:32:17.956916 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 21:32:17.956926 kernel: ACPI: Added _OSI(Module Device)
Nov 12 21:32:17.956935 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 21:32:17.956980 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 21:32:17.956992 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 21:32:17.957002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 21:32:17.957012 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 21:32:17.957022 kernel: ACPI: Interpreter enabled
Nov 12 21:32:17.957036 kernel: ACPI: PM: (supports S0 S5)
Nov 12 21:32:17.957046 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 21:32:17.957057 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 21:32:17.958030 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 21:32:17.958050 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 21:32:17.958060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 21:32:17.958409 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 21:32:17.958568 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 21:32:17.958720 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 21:32:17.958733 kernel: PCI host bridge to bus 0000:00
Nov 12 21:32:17.958885 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 21:32:17.959018 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 21:32:17.959229 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 21:32:17.959351 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Nov 12 21:32:17.959471 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 21:32:17.959601 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 12 21:32:17.959713 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 21:32:17.959876 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 21:32:17.960010 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Nov 12 21:32:17.961012 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Nov 12 21:32:17.961180 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Nov 12 21:32:17.961325 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Nov 12 21:32:17.961448 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Nov 12 21:32:17.961569 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 21:32:17.961703 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.961853 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Nov 12 21:32:17.961995 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.962210 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Nov 12 21:32:17.962346 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.962467 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Nov 12 21:32:17.962595 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.962715 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Nov 12 21:32:17.962844 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.962964 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Nov 12 21:32:17.963130 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.963259 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Nov 12 21:32:17.963387 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.963508 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Nov 12 21:32:17.963636 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.963756 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Nov 12 21:32:17.963894 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 12 21:32:17.964015 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Nov 12 21:32:17.964348 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 21:32:17.964476 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 21:32:17.964603 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 21:32:17.964742 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Nov 12 21:32:17.964870 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Nov 12 21:32:17.964997 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 21:32:17.965150 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 12 21:32:17.965292 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 12 21:32:17.965545 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Nov 12 21:32:17.965705 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 12 21:32:17.965837 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Nov 12 21:32:17.965966 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 12 21:32:17.968573 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 12 21:32:17.968760 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 12 21:32:17.968906 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 12 21:32:17.969043 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Nov 12 21:32:17.969207 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 12 21:32:17.969357 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 12 21:32:17.969494 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 12 21:32:17.969656 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 12 21:32:17.969802 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Nov 12 21:32:17.969940 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Nov 12 21:32:17.970063 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 12 21:32:17.970215 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 12 21:32:17.970334 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 12 21:32:17.970473 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 12 21:32:17.970606 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 12 21:32:17.970729 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 12 21:32:17.970854 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 12 21:32:17.970982 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 12 21:32:17.971723 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 12 21:32:17.971857 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Nov 12 21:32:17.972019 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 12 21:32:17.974191 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 12 21:32:17.974360 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 12 21:32:17.974524 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 12 21:32:17.974656 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Nov 12 21:32:17.974784 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Nov 12 21:32:17.974908 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 12 21:32:17.975035 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 12 21:32:17.975213 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 12 21:32:17.975225 kernel: acpiphp: Slot [0] registered
Nov 12 21:32:17.975358 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 12 21:32:17.975482 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Nov 12 21:32:17.975607 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Nov 12 21:32:17.975732 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Nov 12 21:32:17.975889 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 12 21:32:17.976043 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 12 21:32:17.976214 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 12 21:32:17.976226 kernel: acpiphp: Slot [0-2] registered
Nov 12 21:32:17.976346 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 12 21:32:17.976479 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 12 21:32:17.976627 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 12 21:32:17.976638 kernel: acpiphp: Slot [0-3] registered
Nov 12 21:32:17.976782 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 12 21:32:17.976906 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 12 21:32:17.977025 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 12 21:32:17.977035 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 21:32:17.977043 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 21:32:17.977051 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 21:32:17.977058 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 21:32:17.977066 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 21:32:17.977089 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 21:32:17.977096 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 21:32:17.977107 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 21:32:17.977115 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 21:32:17.977123 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 21:32:17.977131 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 21:32:17.977138 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 21:32:17.977146 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 21:32:17.977154 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 21:32:17.977161 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 21:32:17.977169 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 21:32:17.977181 kernel: iommu: Default domain type: Translated
Nov 12 21:32:17.977196 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 21:32:17.977209 kernel: PCI: Using ACPI for IRQ routing
Nov 12 21:32:17.977222 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 21:32:17.977234 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 21:32:17.977245 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Nov 12 21:32:17.977408 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 21:32:17.977566 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 21:32:17.977746 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 21:32:17.977764 kernel: vgaarb: loaded
Nov 12 21:32:17.977776 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 21:32:17.977787 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 21:32:17.977798 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 21:32:17.977809 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 21:32:17.977821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 21:32:17.977831 kernel: pnp: PnP ACPI init
Nov 12 21:32:17.977977 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 21:32:17.977994 kernel: pnp: PnP ACPI: found 5 devices
Nov 12 21:32:17.978002 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 21:32:17.978010 kernel: NET: Registered PF_INET protocol family
Nov 12 21:32:17.978017 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 21:32:17.978025 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 12 21:32:17.978033 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 21:32:17.978041 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 21:32:17.978049 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 12 21:32:17.978059 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 12 21:32:17.978066 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 21:32:17.979149 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 21:32:17.979157 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 21:32:17.979165 kernel: NET: Registered PF_XDP protocol family
Nov 12 21:32:17.979329 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 12 21:32:17.979486 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 12 21:32:17.979627 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 12 21:32:17.979755 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Nov 12 21:32:17.979874 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Nov 12 21:32:17.979993 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Nov 12 21:32:17.981144 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 12 21:32:17.981276 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 12 21:32:17.981394 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 12 21:32:17.981515 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 12 21:32:17.981637 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 12 21:32:17.981764 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 12 21:32:17.981884 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 12 21:32:17.982003 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 12 21:32:17.982751 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 12 21:32:17.982903 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 12 21:32:17.983024 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 12 21:32:17.983161 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 12 21:32:17.983325 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 12 21:32:17.983502 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 12 21:32:17.983626 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 12 21:32:17.983750 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 12 21:32:17.983869 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 12 21:32:17.983989 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 12 21:32:17.984137 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 12 21:32:17.984311 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Nov 12 21:32:17.984438 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 12 21:32:17.984556 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 12 21:32:17.984701 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 12 21:32:17.984821 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Nov 12 21:32:17.984971 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 12 21:32:17.985600 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 12 21:32:17.985777 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 12 21:32:17.985929 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Nov 12 21:32:17.986169 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 12 21:32:17.986346 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 12 21:32:17.986502 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 21:32:17.986645 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 21:32:17.986782 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 21:32:17.986892 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Nov 12 21:32:17.987013 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 21:32:17.987182 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 12 21:32:17.987320 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 12 21:32:17.987437 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 12 21:32:17.987561 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 12 21:32:17.987684 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 12 21:32:17.987807 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 12 21:32:17.987948 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 12 21:32:17.988208 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 12 21:32:17.988337 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 12 21:32:17.988528 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 12 21:32:17.988703 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 12 21:32:17.988844 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 12 21:32:17.988986 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 12 21:32:17.989151 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Nov 12 21:32:17.989273 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 12 21:32:17.989425 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 12 21:32:17.989553 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Nov 12 21:32:17.989711 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 12 21:32:17.989833 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 12 21:32:17.989995 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Nov 12 21:32:17.990259 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 12 21:32:17.990392 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 12 21:32:17.990408 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 21:32:17.990420 kernel: PCI: CLS 0 bytes, default 64
Nov 12 21:32:17.990438 kernel: Initialise system trusted keyrings
Nov 12 21:32:17.990450 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 12 21:32:17.990462 kernel: Key type asymmetric registered
Nov 12 21:32:17.990474 kernel: Asymmetric key parser 'x509' registered
Nov 12 21:32:17.990486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 21:32:17.990498 kernel: io scheduler mq-deadline registered
Nov 12 21:32:17.990509
kernel: io scheduler kyber registered Nov 12 21:32:17.990521 kernel: io scheduler bfq registered Nov 12 21:32:17.990709 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 12 21:32:17.990837 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 12 21:32:17.991003 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 12 21:32:17.991318 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 12 21:32:17.991457 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 12 21:32:17.991577 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 12 21:32:17.991806 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 12 21:32:17.991952 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 12 21:32:17.992166 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 12 21:32:17.992296 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 12 21:32:17.992519 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 12 21:32:17.992646 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 12 21:32:17.992836 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 12 21:32:17.992961 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 12 21:32:17.993115 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 12 21:32:17.993284 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 12 21:32:17.993297 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 12 21:32:17.993418 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Nov 12 21:32:17.993544 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Nov 12 21:32:17.993557 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 21:32:17.993621 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Nov 12 21:32:17.993635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 21:32:17.993647 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 21:32:17.993657 kernel: i8042: PNP: PS/2 Controller 
[PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 21:32:17.993668 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 21:32:17.993679 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 21:32:17.993695 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 21:32:17.993899 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 12 21:32:17.994151 kernel: rtc_cmos 00:03: registered as rtc0 Nov 12 21:32:17.994283 kernel: rtc_cmos 00:03: setting system clock to 2024-11-12T21:32:17 UTC (1731447137) Nov 12 21:32:17.994481 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 12 21:32:17.994496 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 12 21:32:17.994504 kernel: NET: Registered PF_INET6 protocol family Nov 12 21:32:17.994512 kernel: Segment Routing with IPv6 Nov 12 21:32:17.994526 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 21:32:17.994534 kernel: NET: Registered PF_PACKET protocol family Nov 12 21:32:17.994542 kernel: Key type dns_resolver registered Nov 12 21:32:17.994550 kernel: IPI shorthand broadcast: enabled Nov 12 21:32:17.994558 kernel: sched_clock: Marking stable (1230007548, 146043326)->(1385568032, -9517158) Nov 12 21:32:17.994566 kernel: registered taskstats version 1 Nov 12 21:32:17.994574 kernel: Loading compiled-in X.509 certificates Nov 12 21:32:17.994583 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 21:32:17.994590 kernel: Key type .fscrypt registered Nov 12 21:32:17.994601 kernel: Key type fscrypt-provisioning registered Nov 12 21:32:17.994609 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 21:32:17.994617 kernel: ima: Allocated hash algorithm: sha1
Nov 12 21:32:17.994626 kernel: ima: No architecture policies found
Nov 12 21:32:17.994633 kernel: clk: Disabling unused clocks
Nov 12 21:32:17.994642 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 21:32:17.994650 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 21:32:17.994658 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 21:32:17.994666 kernel: Run /init as init process
Nov 12 21:32:17.994720 kernel: with arguments:
Nov 12 21:32:17.994730 kernel: /init
Nov 12 21:32:17.994737 kernel: with environment:
Nov 12 21:32:17.994745 kernel: HOME=/
Nov 12 21:32:17.994753 kernel: TERM=linux
Nov 12 21:32:17.994760 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 21:32:17.994771 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 21:32:17.994782 systemd[1]: Detected virtualization kvm.
Nov 12 21:32:17.994793 systemd[1]: Detected architecture x86-64.
Nov 12 21:32:17.994801 systemd[1]: Running in initrd.
Nov 12 21:32:17.994810 systemd[1]: No hostname configured, using default hostname.
Nov 12 21:32:17.994818 systemd[1]: Hostname set to .
Nov 12 21:32:17.994826 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 21:32:17.994835 systemd[1]: Queued start job for default target initrd.target.
Nov 12 21:32:17.994843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 21:32:17.994852 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 21:32:17.994864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 21:32:17.994872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 21:32:17.994881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 21:32:17.994890 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 21:32:17.994900 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 21:32:17.994908 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 21:32:17.994917 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 21:32:17.994928 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 21:32:17.994937 systemd[1]: Reached target paths.target - Path Units.
Nov 12 21:32:17.994946 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 21:32:17.994954 systemd[1]: Reached target swap.target - Swaps.
Nov 12 21:32:17.994963 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 21:32:17.994972 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 21:32:17.994980 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 21:32:17.994989 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 21:32:17.995000 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 21:32:17.995008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 21:32:17.995017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 21:32:17.995025 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 21:32:17.995103 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 21:32:17.995112 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 21:32:17.995120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 21:32:17.995129 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 21:32:17.995137 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 21:32:17.995150 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 21:32:17.995159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 21:32:17.995167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 21:32:17.995176 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 21:32:17.995210 systemd-journald[188]: Collecting audit messages is disabled.
Nov 12 21:32:17.995234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 21:32:17.995243 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 21:32:17.995252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 21:32:17.995264 systemd-journald[188]: Journal started
Nov 12 21:32:17.995282 systemd-journald[188]: Runtime Journal (/run/log/journal/28039648453243ada8a776fa1fec9872) is 4.8M, max 38.4M, 33.6M free.
Nov 12 21:32:17.953957 systemd-modules-load[189]: Inserted module 'overlay'
Nov 12 21:32:18.033421 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 21:32:18.033446 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 21:32:18.033458 kernel: Bridge firewalling registered
Nov 12 21:32:18.004270 systemd-modules-load[189]: Inserted module 'br_netfilter'
Nov 12 21:32:18.033286 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 21:32:18.034155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 21:32:18.035239 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 21:32:18.043201 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 21:32:18.045208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 21:32:18.048245 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 21:32:18.055284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 21:32:18.067976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 21:32:18.070895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 21:32:18.077407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 21:32:18.083452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 21:32:18.084316 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 21:32:18.088306 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 21:32:18.105663 dracut-cmdline[223]: dracut-dracut-053
Nov 12 21:32:18.109110 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 21:32:18.118301 systemd-resolved[220]: Positive Trust Anchors:
Nov 12 21:32:18.118319 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 21:32:18.118351 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 21:32:18.121086 systemd-resolved[220]: Defaulting to hostname 'linux'.
Nov 12 21:32:18.123852 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 21:32:18.125126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 21:32:18.209156 kernel: SCSI subsystem initialized
Nov 12 21:32:18.219103 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 21:32:18.232177 kernel: iscsi: registered transport (tcp)
Nov 12 21:32:18.254298 kernel: iscsi: registered transport (qla4xxx)
Nov 12 21:32:18.254388 kernel: QLogic iSCSI HBA Driver
Nov 12 21:32:18.306863 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 21:32:18.315251 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 21:32:18.343213 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 21:32:18.343305 kernel: device-mapper: uevent: version 1.0.3
Nov 12 21:32:18.343318 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 21:32:18.388118 kernel: raid6: avx2x4 gen() 29046 MB/s
Nov 12 21:32:18.405110 kernel: raid6: avx2x2 gen() 30265 MB/s
Nov 12 21:32:18.422326 kernel: raid6: avx2x1 gen() 24610 MB/s
Nov 12 21:32:18.422417 kernel: raid6: using algorithm avx2x2 gen() 30265 MB/s
Nov 12 21:32:18.441128 kernel: raid6: .... xor() 19629 MB/s, rmw enabled
Nov 12 21:32:18.441211 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 21:32:18.462121 kernel: xor: automatically using best checksumming function avx
Nov 12 21:32:18.625119 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 21:32:18.639493 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 21:32:18.646351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 21:32:18.662114 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Nov 12 21:32:18.667304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 21:32:18.675264 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 21:32:18.695367 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Nov 12 21:32:18.741240 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 21:32:18.747287 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 21:32:18.830126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 21:32:18.839360 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 21:32:18.856282 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 21:32:18.858621 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 21:32:18.860346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 21:32:18.861428 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 21:32:18.868390 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 21:32:18.880194 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 21:32:18.937261 kernel: ACPI: bus type USB registered
Nov 12 21:32:18.937328 kernel: usbcore: registered new interface driver usbfs
Nov 12 21:32:18.939094 kernel: usbcore: registered new interface driver hub
Nov 12 21:32:18.939126 kernel: usbcore: registered new device driver usb
Nov 12 21:32:18.941145 kernel: scsi host0: Virtio SCSI HBA
Nov 12 21:32:18.943086 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 21:32:18.956093 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 12 21:32:18.962504 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 21:32:18.962637 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 21:32:18.998461 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 21:32:19.004126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 21:32:19.004286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 21:32:19.004806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 21:32:19.013404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 21:32:19.027107 kernel: libata version 3.00 loaded.
Nov 12 21:32:19.046085 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 12 21:32:19.068405 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Nov 12 21:32:19.068598 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 12 21:32:19.068787 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 12 21:32:19.068967 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Nov 12 21:32:19.069167 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Nov 12 21:32:19.069332 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 21:32:19.069348 kernel: AES CTR mode by8 optimization enabled
Nov 12 21:32:19.069363 kernel: hub 1-0:1.0: USB hub found
Nov 12 21:32:19.069566 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 21:32:19.079171 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 21:32:19.079197 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 21:32:19.079377 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 21:32:19.079623 kernel: hub 1-0:1.0: 4 ports detected
Nov 12 21:32:19.079812 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 12 21:32:19.080043 kernel: scsi host1: ahci
Nov 12 21:32:19.080225 kernel: hub 2-0:1.0: USB hub found
Nov 12 21:32:19.080465 kernel: hub 2-0:1.0: 4 ports detected
Nov 12 21:32:19.080667 kernel: scsi host2: ahci
Nov 12 21:32:19.080875 kernel: scsi host3: ahci
Nov 12 21:32:19.081057 kernel: scsi host4: ahci
Nov 12 21:32:19.081366 kernel: scsi host5: ahci
Nov 12 21:32:19.081570 kernel: scsi host6: ahci
Nov 12 21:32:19.081775 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49
Nov 12 21:32:19.081793 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49
Nov 12 21:32:19.081810 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49
Nov 12 21:32:19.081831 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49
Nov 12 21:32:19.081845 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49
Nov 12 21:32:19.081859 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49
Nov 12 21:32:19.141159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 21:32:19.146292 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 21:32:19.165478 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 21:32:19.304121 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 12 21:32:19.391383 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 12 21:32:19.391462 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 21:32:19.391473 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 21:32:19.392306 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 21:32:19.394711 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 21:32:19.396923 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 21:32:19.396995 kernel: ata1.00: applying bridge limits
Nov 12 21:32:19.400492 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 21:32:19.400582 kernel: ata1.00: configured for UDMA/100
Nov 12 21:32:19.407131 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 21:32:19.459144 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 21:32:19.465065 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 12 21:32:19.497719 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 12 21:32:19.498063 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 12 21:32:19.498341 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 12 21:32:19.498571 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 21:32:19.510761 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 12 21:32:19.512216 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 21:32:19.512242 kernel: usbcore: registered new interface driver usbhid
Nov 12 21:32:19.512259 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 21:32:19.512276 kernel: usbhid: USB HID core driver
Nov 12 21:32:19.512293 kernel: GPT:17805311 != 80003071
Nov 12 21:32:19.512309 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 21:32:19.512325 kernel: GPT:17805311 != 80003071
Nov 12 21:32:19.512351 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 21:32:19.512369 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 21:32:19.512392 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 12 21:32:19.512756 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Nov 12 21:32:19.512778 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Nov 12 21:32:19.513184 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Nov 12 21:32:19.561171 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (458)
Nov 12 21:32:19.568092 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (452)
Nov 12 21:32:19.580786 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 12 21:32:19.592276 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 12 21:32:19.600122 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 12 21:32:19.601776 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 12 21:32:19.609959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 12 21:32:19.617370 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 21:32:19.624816 disk-uuid[578]: Primary Header is updated.
Nov 12 21:32:19.624816 disk-uuid[578]: Secondary Entries is updated.
Nov 12 21:32:19.624816 disk-uuid[578]: Secondary Header is updated.
Nov 12 21:32:19.641118 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 21:32:19.656148 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 21:32:19.666109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 21:32:20.668145 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 21:32:20.669376 disk-uuid[580]: The operation has completed successfully.
Nov 12 21:32:20.759764 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 21:32:20.759899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 21:32:20.775195 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 21:32:20.793327 sh[599]: Success
Nov 12 21:32:20.807148 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 21:32:20.883335 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 21:32:20.895865 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 21:32:20.897646 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 21:32:20.923445 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 21:32:20.923535 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 21:32:20.926331 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 21:32:20.926399 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 21:32:20.927583 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 21:32:20.937125 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 21:32:20.939436 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 21:32:20.940604 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 21:32:20.946194 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 21:32:20.949191 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 21:32:20.961164 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 21:32:20.961212 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 21:32:20.961223 kernel: BTRFS info (device sda6): using free space tree
Nov 12 21:32:20.966552 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 21:32:20.966576 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 21:32:20.978233 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 21:32:20.977894 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 21:32:20.986985 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 21:32:20.998229 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 21:32:21.082722 ignition[697]: Ignition 2.19.0
Nov 12 21:32:21.082738 ignition[697]: Stage: fetch-offline
Nov 12 21:32:21.084904 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 21:32:21.082778 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Nov 12 21:32:21.082788 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 12 21:32:21.082916 ignition[697]: parsed url from cmdline: ""
Nov 12 21:32:21.082923 ignition[697]: no config URL provided
Nov 12 21:32:21.082929 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 21:32:21.082938 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Nov 12 21:32:21.082944 ignition[697]: failed to fetch config: resource requires networking
Nov 12 21:32:21.089384 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 21:32:21.083338 ignition[697]: Ignition finished successfully
Nov 12 21:32:21.096243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 21:32:21.121716 systemd-networkd[785]: lo: Link UP
Nov 12 21:32:21.121730 systemd-networkd[785]: lo: Gained carrier
Nov 12 21:32:21.124549 systemd-networkd[785]: Enumeration completed
Nov 12 21:32:21.124636 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 21:32:21.125374 systemd[1]: Reached target network.target - Network.
Nov 12 21:32:21.125421 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:21.125425 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 21:32:21.126475 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:21.126482 systemd-networkd[785]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 21:32:21.132353 systemd-networkd[785]: eth0: Link UP
Nov 12 21:32:21.132357 systemd-networkd[785]: eth0: Gained carrier
Nov 12 21:32:21.132366 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:21.133295 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 21:32:21.138315 systemd-networkd[785]: eth1: Link UP
Nov 12 21:32:21.138320 systemd-networkd[785]: eth1: Gained carrier
Nov 12 21:32:21.138333 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:21.151782 ignition[787]: Ignition 2.19.0
Nov 12 21:32:21.151795 ignition[787]: Stage: fetch
Nov 12 21:32:21.151994 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Nov 12 21:32:21.152008 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 12 21:32:21.152138 ignition[787]: parsed url from cmdline: ""
Nov 12 21:32:21.152143 ignition[787]: no config URL provided
Nov 12 21:32:21.152150 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 21:32:21.152161 ignition[787]: no config at "/usr/lib/ignition/user.ign"
Nov 12 21:32:21.152183 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 12 21:32:21.152430 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 12 21:32:21.169160 systemd-networkd[785]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 21:32:21.258170 systemd-networkd[785]: eth0: DHCPv4 address 188.245.86.234/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 12 21:32:21.353343 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Nov 12 21:32:21.359292 ignition[787]: GET result: OK
Nov 12 21:32:21.359357 ignition[787]: parsing config with SHA512: fe678fd51d51bf3054639d2199266cc636f9e6851bf38113d3b1071f9f93642a3a6bfce1fada4b0c231ec505d76e968f2072d1e8f4003e9b566fbc1dc341bdd6
Nov 12 21:32:21.363813 unknown[787]: fetched base config from "system"
Nov 12 21:32:21.363825 unknown[787]: fetched base config from "system"
Nov 12 21:32:21.364298 ignition[787]: fetch: fetch complete
Nov 12 21:32:21.363834 unknown[787]: fetched user config from "hetzner"
Nov 12 21:32:21.364304 ignition[787]: fetch: fetch passed
Nov 12 21:32:21.368768 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 21:32:21.364355 ignition[787]: Ignition finished successfully
Nov 12 21:32:21.377473 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 21:32:21.395616 ignition[794]: Ignition 2.19.0
Nov 12 21:32:21.395629 ignition[794]: Stage: kargs
Nov 12 21:32:21.395820 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Nov 12 21:32:21.395834 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 12 21:32:21.402859 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 21:32:21.396924 ignition[794]: kargs: kargs passed
Nov 12 21:32:21.396977 ignition[794]: Ignition finished successfully
Nov 12 21:32:21.414343 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 21:32:21.428948 ignition[800]: Ignition 2.19.0
Nov 12 21:32:21.428958 ignition[800]: Stage: disks
Nov 12 21:32:21.429139 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Nov 12 21:32:21.432087 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 21:32:21.429149 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 12 21:32:21.433172 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 21:32:21.429886 ignition[800]: disks: disks passed
Nov 12 21:32:21.433970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 21:32:21.429928 ignition[800]: Ignition finished successfully
Nov 12 21:32:21.435046 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 21:32:21.436092 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 21:32:21.436935 systemd[1]: Reached target basic.target - Basic System.
Nov 12 21:32:21.448305 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 21:32:21.464038 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 12 21:32:21.467255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 21:32:21.474266 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 21:32:21.580116 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 21:32:21.580532 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 21:32:21.581644 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 21:32:21.589360 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 21:32:21.605376 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 21:32:21.610362 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 21:32:21.611690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 21:32:21.612441 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 21:32:21.616211 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 21:32:21.625107 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (816)
Nov 12 21:32:21.625718 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 21:32:21.633415 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 21:32:21.633485 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 21:32:21.634203 kernel: BTRFS info (device sda6): using free space tree
Nov 12 21:32:21.643285 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 21:32:21.643356 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 21:32:21.648366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 21:32:21.694243 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 21:32:21.696164 coreos-metadata[818]: Nov 12 21:32:21.694 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 12 21:32:21.696164 coreos-metadata[818]: Nov 12 21:32:21.695 INFO Fetch successful
Nov 12 21:32:21.696164 coreos-metadata[818]: Nov 12 21:32:21.695 INFO wrote hostname ci-4081-2-0-6-01c097edc7 to /sysroot/etc/hostname
Nov 12 21:32:21.702369 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Nov 12 21:32:21.699101 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 21:32:21.707494 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 21:32:21.713625 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 21:32:21.807974 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 21:32:21.812174 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 21:32:21.815243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 21:32:21.829119 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 21:32:21.846921 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 21:32:21.858012 ignition[937]: INFO : Ignition 2.19.0
Nov 12 21:32:21.858012 ignition[937]: INFO : Stage: mount
Nov 12 21:32:21.859467 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 21:32:21.859467 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 12 21:32:21.859467 ignition[937]: INFO : mount: mount passed
Nov 12 21:32:21.859467 ignition[937]: INFO : Ignition finished successfully
Nov 12 21:32:21.862568 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 21:32:21.871292 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 21:32:21.923569 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 21:32:21.935301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 21:32:21.951146 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (949)
Nov 12 21:32:21.955390 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 21:32:21.955460 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 21:32:21.959344 kernel: BTRFS info (device sda6): using free space tree
Nov 12 21:32:21.966414 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 21:32:21.966505 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 21:32:21.971746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 21:32:22.001110 ignition[966]: INFO : Ignition 2.19.0
Nov 12 21:32:22.001110 ignition[966]: INFO : Stage: files
Nov 12 21:32:22.002368 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 21:32:22.002368 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 12 21:32:22.003691 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 21:32:22.004385 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 21:32:22.004385 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 21:32:22.007945 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 21:32:22.008670 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 21:32:22.009494 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 21:32:22.008985 unknown[966]: wrote ssh authorized keys file for user: core
Nov 12 21:32:22.011423 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 21:32:22.012351 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 21:32:22.125408 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 21:32:22.436841 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 21:32:22.436841 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 21:32:22.439055 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Nov 12 21:32:22.500558 systemd-networkd[785]: eth1: Gained IPv6LL
Nov 12 21:32:22.948357 systemd-networkd[785]: eth0: Gained IPv6LL
Nov 12 21:32:22.996375 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 21:32:23.331883 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 21:32:23.331883 ignition[966]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 21:32:23.336422 ignition[966]: INFO : files: files passed
Nov 12 21:32:23.336422 ignition[966]: INFO : Ignition finished successfully
Nov 12 21:32:23.336642 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 21:32:23.344352 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 21:32:23.350227 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 21:32:23.352195 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 21:32:23.352585 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 21:32:23.364200 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 21:32:23.364200 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 21:32:23.366589 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 21:32:23.368871 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 21:32:23.369728 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 21:32:23.374195 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 21:32:23.399944 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 21:32:23.400062 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 21:32:23.401568 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
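Two places in this boot pin content to a digest: the fetch stage logs "parsing config with SHA512: …" before trusting the userdata, and the kernel command line carries `verity.usrhash=` for the /usr partition. A sketch of that verify-against-a-pinned-digest idea using Python's hashlib (illustrative only; Ignition is written in Go, and dm-verity performs its check inside the kernel):

```python
import hashlib
import hmac

def sha512_matches(payload: bytes, expected_hex: str) -> bool:
    """Hash the payload and compare hex digests in constant time."""
    digest = hashlib.sha512(payload).hexdigest()
    return hmac.compare_digest(digest, expected_hex)

# Hypothetical config body; the pinned value stands in for the digest
# a provisioning system would have recorded ahead of time.
config = b'{"ignition": {"version": "3.3.0"}}'
pinned = hashlib.sha512(config).hexdigest()
```

`sha512_matches(config, pinned)` is True, while any tampered payload fails the comparison, which is exactly why the log prints the digest before acting on the fetched config.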
Nov 12 21:32:23.402949 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 21:32:23.403561 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 21:32:23.404900 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 21:32:23.421409 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 21:32:23.430258 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 21:32:23.439972 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 21:32:23.440711 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 21:32:23.441825 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 21:32:23.442894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 21:32:23.443017 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 21:32:23.444290 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 21:32:23.444992 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 21:32:23.446117 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 21:32:23.447130 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 21:32:23.448777 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 21:32:23.449978 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 21:32:23.451126 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 21:32:23.452289 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 21:32:23.453406 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 21:32:23.454542 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 21:32:23.455583 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 21:32:23.455710 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 21:32:23.457035 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 21:32:23.457816 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 21:32:23.458879 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 21:32:23.458981 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 21:32:23.460003 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 21:32:23.460133 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 21:32:23.461702 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 21:32:23.461842 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 21:32:23.462600 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 21:32:23.462777 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 21:32:23.463463 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 12 21:32:23.463565 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 21:32:23.471689 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 21:32:23.475354 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 21:32:23.476572 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 21:32:23.476874 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 21:32:23.478795 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 21:32:23.479036 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 21:32:23.490065 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 21:32:23.490200 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 21:32:23.496153 ignition[1018]: INFO : Ignition 2.19.0
Nov 12 21:32:23.497295 ignition[1018]: INFO : Stage: umount
Nov 12 21:32:23.498371 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 21:32:23.498371 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 12 21:32:23.505869 ignition[1018]: INFO : umount: umount passed
Nov 12 21:32:23.505869 ignition[1018]: INFO : Ignition finished successfully
Nov 12 21:32:23.506625 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 21:32:23.506786 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 21:32:23.509130 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 21:32:23.509246 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 21:32:23.509778 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 21:32:23.509852 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 21:32:23.513708 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 21:32:23.513764 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 21:32:23.515259 systemd[1]: Stopped target network.target - Network.
Nov 12 21:32:23.515670 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 21:32:23.515721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 21:32:23.523388 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 21:32:23.525216 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 21:32:23.527292 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 21:32:23.531180 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 21:32:23.532947 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 21:32:23.533431 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 21:32:23.533488 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 21:32:23.537891 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 21:32:23.537952 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 21:32:23.538465 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 21:32:23.538521 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 21:32:23.542434 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 21:32:23.542497 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 21:32:23.543305 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 21:32:23.546368 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 21:32:23.548288 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 21:32:23.550142 systemd-networkd[785]: eth0: DHCPv6 lease lost
Nov 12 21:32:23.550621 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 21:32:23.550747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 21:32:23.552980 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 21:32:23.553138 systemd-networkd[785]: eth1: DHCPv6 lease lost
Nov 12 21:32:23.554167 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 21:32:23.557400 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 21:32:23.557546 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 21:32:23.558638 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 21:32:23.558749 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 21:32:23.562775 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 21:32:23.562870 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 21:32:23.569229 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 21:32:23.570368 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 21:32:23.570431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 21:32:23.570984 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 21:32:23.571030 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 21:32:23.571547 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 21:32:23.571597 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 21:32:23.572601 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 21:32:23.572668 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 21:32:23.574018 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 21:32:23.587400 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 21:32:23.587565 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 21:32:23.593766 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 21:32:23.593944 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 21:32:23.595046 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 21:32:23.595110 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 21:32:23.595980 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 21:32:23.596019 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 21:32:23.597116 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 21:32:23.597166 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 21:32:23.598839 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 21:32:23.598891 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 21:32:23.600274 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 21:32:23.600366 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 21:32:23.607595 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 21:32:23.608820 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 21:32:23.608882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 21:32:23.610741 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 21:32:23.610795 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 21:32:23.611338 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 21:32:23.611386 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 21:32:23.611940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 21:32:23.611993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 21:32:23.614408 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 21:32:23.614510 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 21:32:23.615981 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 21:32:23.621302 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 21:32:23.633253 systemd[1]: Switching root.
Nov 12 21:32:23.662825 systemd-journald[188]: Journal stopped
Nov 12 21:32:24.898507 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Nov 12 21:32:24.898581 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 21:32:24.898595 kernel: SELinux: policy capability open_perms=1
Nov 12 21:32:24.898606 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 21:32:24.898625 kernel: SELinux: policy capability always_check_network=0
Nov 12 21:32:24.898640 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 21:32:24.898651 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 21:32:24.898662 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 21:32:24.898672 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 21:32:24.898683 kernel: audit: type=1403 audit(1731447143.824:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 21:32:24.898695 systemd[1]: Successfully loaded SELinux policy in 47.266ms.
Nov 12 21:32:24.898718 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.974ms.
Nov 12 21:32:24.898734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 21:32:24.898746 systemd[1]: Detected virtualization kvm.
Nov 12 21:32:24.898758 systemd[1]: Detected architecture x86-64.
Nov 12 21:32:24.898770 systemd[1]: Detected first boot.
Nov 12 21:32:24.898782 systemd[1]: Hostname set to <ci-4081-2-0-6-01c097edc7>.
Nov 12 21:32:24.898793 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 21:32:24.898805 zram_generator::config[1063]: No configuration found.
Nov 12 21:32:24.898822 systemd[1]: Populated /etc with preset unit settings.
Nov 12 21:32:24.898838 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 21:32:24.898849 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 21:32:24.898861 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 21:32:24.898873 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 21:32:24.898885 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 21:32:24.898897 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 21:32:24.898911 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 21:32:24.898934 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 21:32:24.898954 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 21:32:24.898973 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 21:32:24.898988 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 21:32:24.899002 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 21:32:24.899014 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 21:32:24.899026 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 21:32:24.899038 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 21:32:24.899050 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 21:32:24.899062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 21:32:24.899100 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 21:32:24.899112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 21:32:24.899124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 21:32:24.899137 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 21:32:24.899149 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 21:32:24.899160 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 21:32:24.899174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 21:32:24.899186 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 21:32:24.899198 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 21:32:24.899209 systemd[1]: Reached target swap.target - Swaps.
Nov 12 21:32:24.899221 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 21:32:24.899233 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 21:32:24.899244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 21:32:24.899261 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 21:32:24.899272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 21:32:24.899284 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 21:32:24.899298 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 21:32:24.899310 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 21:32:24.899326 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 21:32:24.899340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:24.899352 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 21:32:24.899364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 21:32:24.899378 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 21:32:24.899391 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 21:32:24.899403 systemd[1]: Reached target machines.target - Containers.
Nov 12 21:32:24.899415 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 21:32:24.899429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 21:32:24.899441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 21:32:24.899453 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 21:32:24.899465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 21:32:24.899479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 21:32:24.899495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 21:32:24.899507 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 21:32:24.899519 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 21:32:24.899531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 21:32:24.899543 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 21:32:24.899555 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 21:32:24.899566 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 21:32:24.899580 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 21:32:24.899592 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 21:32:24.899604 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 21:32:24.899615 kernel: loop: module loaded
Nov 12 21:32:24.899627 kernel: fuse: init (API version 7.39)
Nov 12 21:32:24.899638 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 21:32:24.899650 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 21:32:24.899662 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 21:32:24.899674 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 21:32:24.899689 systemd[1]: Stopped verity-setup.service.
Nov 12 21:32:24.899702 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:24.899714 kernel: ACPI: bus type drm_connector registered
Nov 12 21:32:24.899728 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 21:32:24.899744 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 21:32:24.899756 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 21:32:24.899768 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 21:32:24.899782 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 21:32:24.899794 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 21:32:24.899806 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 21:32:24.899837 systemd-journald[1148]: Collecting audit messages is disabled.
Nov 12 21:32:24.899867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 21:32:24.899881 systemd-journald[1148]: Journal started
Nov 12 21:32:24.899905 systemd-journald[1148]: Runtime Journal (/run/log/journal/28039648453243ada8a776fa1fec9872) is 4.8M, max 38.4M, 33.6M free.
Nov 12 21:32:24.544423 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 21:32:24.565358 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 12 21:32:24.566853 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 21:32:24.905815 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 21:32:24.904142 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 21:32:24.904318 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 21:32:24.905163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 21:32:24.905319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 21:32:24.906371 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 21:32:24.906530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 21:32:24.907399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 21:32:24.907553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 21:32:24.908363 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 21:32:24.908516 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 21:32:24.909538 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 21:32:24.909751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 21:32:24.910854 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 21:32:24.911676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 21:32:24.912629 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 21:32:24.931449 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 21:32:24.938131 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 21:32:24.942955 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 21:32:24.944189 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 21:32:24.944220 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 21:32:24.946218 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 21:32:24.953528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 21:32:24.957243 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 21:32:24.957836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 21:32:24.961433 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 21:32:24.964241 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 21:32:24.965886 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 21:32:24.973274 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 21:32:24.974205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 21:32:24.979224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 21:32:24.983238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 21:32:24.993243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 21:32:24.995890 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 21:32:24.999286 systemd-journald[1148]: Time spent on flushing to /var/log/journal/28039648453243ada8a776fa1fec9872 is 105.192ms for 1134 entries.
Nov 12 21:32:24.999286 systemd-journald[1148]: System Journal (/var/log/journal/28039648453243ada8a776fa1fec9872) is 8.0M, max 584.8M, 576.8M free.
Nov 12 21:32:25.124137 systemd-journald[1148]: Received client request to flush runtime journal.
Nov 12 21:32:25.124196 kernel: loop0: detected capacity change from 0 to 142488
Nov 12 21:32:25.124218 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 21:32:25.124239 kernel: loop1: detected capacity change from 0 to 140768
Nov 12 21:32:24.998218 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 21:32:24.999001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 21:32:25.034492 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 21:32:25.035335 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 21:32:25.045363 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 21:32:25.106948 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 21:32:25.119000 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 21:32:25.121179 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 21:32:25.127340 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Nov 12 21:32:25.127354 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Nov 12 21:32:25.139990 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 21:32:25.142188 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 21:32:25.147112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 21:32:25.159242 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 21:32:25.168259 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 21:32:25.179173 kernel: loop2: detected capacity change from 0 to 8
Nov 12 21:32:25.192843 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 12 21:32:25.208579 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 21:32:25.217884 kernel: loop3: detected capacity change from 0 to 210664
Nov 12 21:32:25.220265 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 21:32:25.240898 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Nov 12 21:32:25.241264 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Nov 12 21:32:25.247649 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 21:32:25.284104 kernel: loop4: detected capacity change from 0 to 142488
Nov 12 21:32:25.315103 kernel: loop5: detected capacity change from 0 to 140768
Nov 12 21:32:25.341288 kernel: loop6: detected capacity change from 0 to 8
Nov 12 21:32:25.345119 kernel: loop7: detected capacity change from 0 to 210664
Nov 12 21:32:25.373421 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Nov 12 21:32:25.374101 (sd-merge)[1209]: Merged extensions into '/usr'.
Nov 12 21:32:25.386866 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 21:32:25.386886 systemd[1]: Reloading...
Nov 12 21:32:25.506105 zram_generator::config[1235]: No configuration found.
Nov 12 21:32:25.615366 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 21:32:25.682472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 21:32:25.745325 systemd[1]: Reloading finished in 357 ms.
Nov 12 21:32:25.780450 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 21:32:25.781805 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 21:32:25.795398 systemd[1]: Starting ensure-sysext.service...
Nov 12 21:32:25.801820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 21:32:25.816216 systemd[1]: Reloading requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)...
Nov 12 21:32:25.816230 systemd[1]: Reloading...
Nov 12 21:32:25.842875 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 21:32:25.843609 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 21:32:25.844764 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 21:32:25.845159 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Nov 12 21:32:25.845285 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Nov 12 21:32:25.850361 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 21:32:25.850375 systemd-tmpfiles[1279]: Skipping /boot
Nov 12 21:32:25.870190 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 21:32:25.870207 systemd-tmpfiles[1279]: Skipping /boot
Nov 12 21:32:25.945105 zram_generator::config[1320]: No configuration found.
Nov 12 21:32:26.064792 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 21:32:26.123355 systemd[1]: Reloading finished in 306 ms.
Nov 12 21:32:26.144771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 21:32:26.151557 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 21:32:26.164302 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 21:32:26.170720 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 21:32:26.175305 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 21:32:26.182312 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 21:32:26.195635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 21:32:26.202380 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 21:32:26.214324 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.214629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 21:32:26.222514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 21:32:26.233461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 21:32:26.242609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 21:32:26.243967 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 21:32:26.244247 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.245985 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 21:32:26.246606 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 21:32:26.251558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 21:32:26.252796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 21:32:26.260737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 21:32:26.261961 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 21:32:26.266320 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Nov 12 21:32:26.275643 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.275871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 21:32:26.282927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 21:32:26.291242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 21:32:26.297000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 21:32:26.297766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 21:32:26.307852 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 21:32:26.309308 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.312586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 21:32:26.312810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 21:32:26.314728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 21:32:26.315599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 21:32:26.318036 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 21:32:26.318448 augenrules[1381]: No rules
Nov 12 21:32:26.318809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 21:32:26.320297 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 21:32:26.323687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 21:32:26.335663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.335888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 21:32:26.343338 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 21:32:26.346535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 21:32:26.349457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 21:32:26.353289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 21:32:26.354284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 21:32:26.354427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.355116 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 21:32:26.358165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 21:32:26.370740 systemd[1]: Finished ensure-sysext.service.
Nov 12 21:32:26.383465 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 21:32:26.388242 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 21:32:26.396113 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 21:32:26.397199 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 21:32:26.398030 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 21:32:26.399249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 21:32:26.408672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 21:32:26.409514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 21:32:26.415159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 21:32:26.415517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 21:32:26.418888 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 21:32:26.426373 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 21:32:26.427161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 21:32:26.430646 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 21:32:26.430839 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 21:32:26.431559 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 21:32:26.445420 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 21:32:26.543847 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 21:32:26.563259 systemd-networkd[1406]: lo: Link UP
Nov 12 21:32:26.563271 systemd-networkd[1406]: lo: Gained carrier
Nov 12 21:32:26.565248 systemd-networkd[1406]: Enumeration completed
Nov 12 21:32:26.565345 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 21:32:26.567535 systemd-networkd[1406]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:26.567546 systemd-networkd[1406]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 21:32:26.573177 systemd-networkd[1406]: eth1: Link UP
Nov 12 21:32:26.573189 systemd-networkd[1406]: eth1: Gained carrier
Nov 12 21:32:26.573202 systemd-networkd[1406]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:26.573537 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 21:32:26.580119 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1425)
Nov 12 21:32:26.586968 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 21:32:26.587618 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 21:32:26.594178 systemd-networkd[1406]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:26.609115 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1425)
Nov 12 21:32:26.612141 systemd-networkd[1406]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 21:32:26.613625 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Nov 12 21:32:26.617581 systemd-resolved[1355]: Positive Trust Anchors:
Nov 12 21:32:26.619112 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 21:32:26.619147 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 21:32:26.625831 systemd-resolved[1355]: Using system hostname 'ci-4081-2-0-6-01c097edc7'.
Nov 12 21:32:26.630209 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 21:32:26.631283 systemd[1]: Reached target network.target - Network.
Nov 12 21:32:26.632166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 21:32:26.639652 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:26.639667 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 21:32:26.640443 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Nov 12 21:32:26.641654 systemd-networkd[1406]: eth0: Link UP
Nov 12 21:32:26.641665 systemd-networkd[1406]: eth0: Gained carrier
Nov 12 21:32:26.641680 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 21:32:26.646026 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Nov 12 21:32:26.682107 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 12 21:32:26.696097 kernel: ACPI: button: Power Button [PWRF]
Nov 12 21:32:26.705131 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1411)
Nov 12 21:32:26.745206 systemd-networkd[1406]: eth0: DHCPv4 address 188.245.86.234/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 12 21:32:26.747208 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Nov 12 21:32:26.752193 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 12 21:32:26.752226 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 12 21:32:26.756587 kernel: Console: switching to colour dummy device 80x25
Nov 12 21:32:26.758245 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 12 21:32:26.758290 kernel: [drm] features: -context_init
Nov 12 21:32:26.761176 kernel: [drm] number of scanouts: 1
Nov 12 21:32:26.761226 kernel: [drm] number of cap sets: 0
Nov 12 21:32:26.771873 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 21:32:26.772016 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 12 21:32:26.775261 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 12 21:32:26.776147 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 12 21:32:26.777689 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 12 21:32:26.780247 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Nov 12 21:32:26.781926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.782235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 21:32:26.789371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 21:32:26.792454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 21:32:26.794660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 21:32:26.794846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 21:32:26.794884 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 21:32:26.794900 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 21:32:26.795405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 21:32:26.795621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 21:32:26.802113 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Nov 12 21:32:26.805730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 21:32:26.806385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 21:32:26.807721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 21:32:26.813106 kernel: EDAC MC: Ver: 3.0.0
Nov 12 21:32:26.826784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 21:32:26.827488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 21:32:26.831496 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 12 21:32:26.831547 kernel: Console: switching to colour frame buffer device 160x50
Nov 12 21:32:26.840102 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 12 21:32:26.842205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 21:32:26.863847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 12 21:32:26.872358 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 21:32:26.880732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 21:32:26.890605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 21:32:26.891314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 21:32:26.908282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 21:32:26.908748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 21:32:26.983111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 21:32:27.008197 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 21:32:27.014386 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 21:32:27.038871 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 21:32:27.073814 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 21:32:27.074283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 21:32:27.074405 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 21:32:27.074626 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 21:32:27.074768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 21:32:27.075135 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 21:32:27.075422 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 21:32:27.075526 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 21:32:27.075615 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 21:32:27.075650 systemd[1]: Reached target paths.target - Path Units.
Nov 12 21:32:27.075729 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 21:32:27.076821 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 21:32:27.080616 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 21:32:27.100766 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 21:32:27.110333 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 21:32:27.119436 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 21:32:27.120780 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 21:32:27.123050 systemd[1]: Reached target basic.target - Basic System.
Nov 12 21:32:27.126273 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 21:32:27.127453 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 21:32:27.127519 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 21:32:27.135444 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 21:32:27.151146 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 21:32:27.164421 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 21:32:27.177277 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 21:32:27.185250 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 21:32:27.185995 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 21:32:27.192290 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 21:32:27.199239 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 21:32:27.203373 coreos-metadata[1477]: Nov 12 21:32:27.203 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Nov 12 21:32:27.206430 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Nov 12 21:32:27.221616 coreos-metadata[1477]: Nov 12 21:32:27.221 INFO Fetch successful
Nov 12 21:32:27.221616 coreos-metadata[1477]: Nov 12 21:32:27.221 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Nov 12 21:32:27.213490 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 21:32:27.217265 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 21:32:27.224923 coreos-metadata[1477]: Nov 12 21:32:27.221 INFO Fetch successful
Nov 12 21:32:27.228965 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 21:32:27.230856 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 21:32:27.234504 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 21:32:27.240108 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 21:32:27.246485 jq[1481]: false
Nov 12 21:32:27.251239 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 21:32:27.256859 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 21:32:27.269541 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 21:32:27.270326 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 21:32:27.270679 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 21:32:27.271890 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 21:32:27.284977 update_engine[1495]: I20241112 21:32:27.280630 1495 main.cc:92] Flatcar Update Engine starting
Nov 12 21:32:27.285829 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 21:32:27.286363 extend-filesystems[1482]: Found loop4
Nov 12 21:32:27.286363 extend-filesystems[1482]: Found loop5
Nov 12 21:32:27.286363 extend-filesystems[1482]: Found loop6
Nov 12 21:32:27.286363 extend-filesystems[1482]: Found loop7
Nov 12 21:32:27.286363 extend-filesystems[1482]: Found sda
Nov 12 21:32:27.286363 extend-filesystems[1482]: Found sda1
Nov 12 21:32:27.286363 extend-filesystems[1482]: Found sda2
Nov 12 21:32:27.343808 extend-filesystems[1482]: Found sda3
Nov 12 21:32:27.343808 extend-filesystems[1482]: Found usr
Nov 12 21:32:27.343808 extend-filesystems[1482]: Found sda4
Nov 12 21:32:27.343808 extend-filesystems[1482]: Found sda6
Nov 12 21:32:27.343808 extend-filesystems[1482]: Found sda7
Nov 12 21:32:27.343808 extend-filesystems[1482]: Found sda9
Nov 12 21:32:27.343808 extend-filesystems[1482]: Checking size of /dev/sda9
Nov 12 21:32:27.343808 extend-filesystems[1482]: Resized partition /dev/sda9
Nov 12 21:32:27.412745 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Nov 12 21:32:27.286788 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 21:32:27.313172 dbus-daemon[1479]: [system] SELinux support is enabled
Nov 12 21:32:27.428622 jq[1496]: true
Nov 12 21:32:27.428884 update_engine[1495]: I20241112 21:32:27.319387 1495 update_check_scheduler.cc:74] Next update check in 5m19s
Nov 12 21:32:27.428924 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024)
Nov 12 21:32:27.438494 tar[1501]: linux-amd64/helm
Nov 12 21:32:27.335228 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 21:32:27.389767 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 21:32:27.392052 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 21:32:27.445253 jq[1505]: true
Nov 12 21:32:27.392121 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 21:32:27.402170 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 21:32:27.402188 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 21:32:27.408242 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 21:32:27.433543 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 21:32:27.464022 systemd-logind[1492]: New seat seat0.
Nov 12 21:32:27.468344 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 12 21:32:27.468365 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 21:32:27.468997 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 21:32:27.516441 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 21:32:27.520392 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 21:32:27.543924 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1425)
Nov 12 21:32:27.586145 bash[1547]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 21:32:27.593333 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 21:32:27.629922 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Nov 12 21:32:27.634511 systemd[1]: Starting sshkeys.service...
Nov 12 21:32:27.663442 extend-filesystems[1511]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 12 21:32:27.663442 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 5
Nov 12 21:32:27.663442 extend-filesystems[1511]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Nov 12 21:32:27.677265 extend-filesystems[1482]: Resized filesystem in /dev/sda9
Nov 12 21:32:27.677265 extend-filesystems[1482]: Found sr0
Nov 12 21:32:27.666620 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 21:32:27.668993 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 21:32:27.686603 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 21:32:27.696503 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
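[Editor's note, not part of the log: the extend-filesystems/EXT4 messages above report the resize only as block counts. With the 4 KiB block size the log itself states ("(4k) blocks long"), those counts translate to sizes as in this small sketch:]

```python
# Sanity-check the on-line resize numbers reported in the log above.
# 1617920 and 9393147 are the before/after block counts; ext4 here
# uses 4 KiB blocks per the "(4k) blocks long" message.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int) -> float:
    """Convert a count of 4 KiB filesystem blocks to GiB."""
    return blocks * BLOCK_SIZE / 2**30

before = blocks_to_gib(1_617_920)  # ≈ 6.2 GiB shipped root filesystem
after = blocks_to_gib(9_393_147)   # ≈ 35.8 GiB after growing to the disk
```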
Nov 12 21:32:27.698525 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 21:32:27.745439 coreos-metadata[1561]: Nov 12 21:32:27.745 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Nov 12 21:32:27.746340 coreos-metadata[1561]: Nov 12 21:32:27.746 INFO Fetch successful
Nov 12 21:32:27.749708 unknown[1561]: wrote ssh authorized keys file for user: core
Nov 12 21:32:27.797321 update-ssh-keys[1565]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 21:32:27.798431 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 12 21:32:27.801636 systemd[1]: Finished sshkeys.service.
Nov 12 21:32:27.859630 containerd[1513]: time="2024-11-12T21:32:27.859521778Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 21:32:27.923327 containerd[1513]: time="2024-11-12T21:32:27.922964942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925132688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925158076Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925172383Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925405970Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925440245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925538819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925559529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925795641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925810600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925823484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926547 containerd[1513]: time="2024-11-12T21:32:27.925833182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 21:32:27.926784 containerd[1513]: time="2024-11-12T21:32:27.925926487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 21:32:27.928305 containerd[1513]: time="2024-11-12T21:32:27.928282275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 21:32:27.928844 containerd[1513]: time="2024-11-12T21:32:27.928823420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 21:32:27.929207 containerd[1513]: time="2024-11-12T21:32:27.929191009Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 21:32:27.929898 containerd[1513]: time="2024-11-12T21:32:27.929879351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 21:32:27.930011 containerd[1513]: time="2024-11-12T21:32:27.929995719Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 21:32:27.935032 containerd[1513]: time="2024-11-12T21:32:27.934998502Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 21:32:27.935198 containerd[1513]: time="2024-11-12T21:32:27.935183148Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 21:32:27.935862 containerd[1513]: time="2024-11-12T21:32:27.935846693Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 21:32:27.935938 containerd[1513]: time="2024-11-12T21:32:27.935925260Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 21:32:27.936007 containerd[1513]: time="2024-11-12T21:32:27.935978720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 21:32:27.936232 containerd[1513]: time="2024-11-12T21:32:27.936215044Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 21:32:27.937093 containerd[1513]: time="2024-11-12T21:32:27.937026075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 21:32:27.937257 containerd[1513]: time="2024-11-12T21:32:27.937235488Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 21:32:27.937347 containerd[1513]: time="2024-11-12T21:32:27.937280452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 21:32:27.937347 containerd[1513]: time="2024-11-12T21:32:27.937300530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 21:32:27.937347 containerd[1513]: time="2024-11-12T21:32:27.937314165Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937347 containerd[1513]: time="2024-11-12T21:32:27.937326609Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937347 containerd[1513]: time="2024-11-12T21:32:27.937338892Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937353349Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937373036Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937386281Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937399896Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937411178Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937430334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937442746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937463 containerd[1513]: time="2024-11-12T21:32:27.937454388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937468214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937486649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937499383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937511205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937523818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937535581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937554086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937568913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937580535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937593449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937618 containerd[1513]: time="2024-11-12T21:32:27.937612776Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937630799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937642171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937652170Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937709607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937726970Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937737550Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937748640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937757497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937768718Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937778115Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 21:32:27.937809 containerd[1513]: time="2024-11-12T21:32:27.937788525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 21:32:27.943136 containerd[1513]: time="2024-11-12T21:32:27.941202990Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 21:32:27.943136 containerd[1513]: time="2024-11-12T21:32:27.941316191Z" level=info msg="Connect containerd service"
Nov 12 21:32:27.943136 containerd[1513]: time="2024-11-12T21:32:27.941419977Z" level=info msg="using legacy CRI server"
Nov 12 21:32:27.943136 containerd[1513]: time="2024-11-12T21:32:27.941428853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 12 21:32:27.946194 containerd[1513]: time="2024-11-12T21:32:27.946159996Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 12 21:32:27.949856 containerd[1513]: time="2024-11-12T21:32:27.949325684Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 21:32:27.949856 containerd[1513]: time="2024-11-12T21:32:27.949670171Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 21:32:27.949856 containerd[1513]: time="2024-11-12T21:32:27.949716988Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 21:32:27.949856 containerd[1513]: time="2024-11-12T21:32:27.949749389Z" level=info msg="Start subscribing containerd event"
Nov 12 21:32:27.949856 containerd[1513]: time="2024-11-12T21:32:27.949783753Z" level=info msg="Start recovering state"
Nov 12 21:32:27.949856 containerd[1513]: time="2024-11-12T21:32:27.949838687Z" level=info msg="Start event monitor"
Nov 12 21:32:27.949856 containerd[1513]: time="2024-11-12T21:32:27.949863794Z" level=info msg="Start snapshots syncer"
Nov 12 21:32:27.950026 containerd[1513]: time="2024-11-12T21:32:27.949873752Z" level=info msg="Start cni network conf syncer for default"
Nov 12 21:32:27.950026 containerd[1513]: time="2024-11-12T21:32:27.949882919Z" level=info msg="Start streaming server"
Nov 12 21:32:27.950020 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 21:32:27.954556 containerd[1513]: time="2024-11-12T21:32:27.954516861Z" level=info msg="containerd successfully booted in 0.097276s"
Nov 12 21:32:28.018140 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 21:32:28.048687 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 21:32:28.063437 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 21:32:28.078566 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 21:32:28.078846 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 21:32:28.091286 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 21:32:28.116373 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 21:32:28.131162 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 21:32:28.140562 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 21:32:28.141493 systemd[1]: Reached target getty.target - Login Prompts.
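[Editor's note, not part of the log: containerd's error above says no CNI network config was found in /etc/cni/net.d, which is expected on a node that has not yet been joined to a cluster. For illustration only, a minimal CNI conflist of the standard shape — the network name, bridge name, and subnet below are hypothetical, not taken from this system — can be built and validated like so:]

```python
import json

# Hypothetical minimal CNI conflist of the kind containerd looks for in
# /etc/cni/net.d (per the CniConfig paths in the log above). All values
# here are illustrative; a real cluster's CNI addon writes its own.
conf = {
    "cniVersion": "0.3.1",
    "name": "containerd-net",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",  # assumes the standard CNI bridge plugin binary
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }
    ],
}
# Provisioning would write this to e.g. /etc/cni/net.d/10-containerd-net.conflist.
payload = json.dumps(conf, indent=2)
```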
Nov 12 21:32:28.206776 tar[1501]: linux-amd64/LICENSE
Nov 12 21:32:28.206776 tar[1501]: linux-amd64/README.md
Nov 12 21:32:28.221051 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 21:32:28.260264 systemd-networkd[1406]: eth1: Gained IPv6LL
Nov 12 21:32:28.260921 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Nov 12 21:32:28.264397 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 21:32:28.265813 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 21:32:28.274380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 21:32:28.279718 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 21:32:28.309227 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 21:32:28.580365 systemd-networkd[1406]: eth0: Gained IPv6LL
Nov 12 21:32:28.580954 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Nov 12 21:32:29.161865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 21:32:29.163045 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 21:32:29.165808 systemd[1]: Startup finished in 1.381s (kernel) + 6.095s (initrd) + 5.387s (userspace) = 12.864s.
Nov 12 21:32:29.174630 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 21:32:29.790018 kubelet[1608]: E1112 21:32:29.789900 1608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 21:32:29.793255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 21:32:29.793568 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 21:32:29.794136 systemd[1]: kubelet.service: Consumed 1.055s CPU time.
Nov 12 21:32:32.894394 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 21:32:32.902867 systemd[1]: Started sshd@0-188.245.86.234:22-35.240.185.59:35396.service - OpenSSH per-connection server daemon (35.240.185.59:35396).
Nov 12 21:32:33.756067 sshd[1621]: Connection closed by authenticating user root 35.240.185.59 port 35396 [preauth]
Nov 12 21:32:33.759630 systemd[1]: sshd@0-188.245.86.234:22-35.240.185.59:35396.service: Deactivated successfully.
Nov 12 21:32:40.044551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 21:32:40.052525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 21:32:40.269391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
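[Editor's note, not part of the log: the kubelet exits because /var/lib/kubelet/config.yaml does not exist yet — that file is normally written by `kubeadm init`/`kubeadm join`, which has not run on this node. Purely to illustrate what the kubelet is looking for, a minimal KubeletConfiguration of the documented shape might look like the fragment below; all values are illustrative, not recovered from this system:]

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml — normally generated
# by kubeadm, shown here only to illustrate the missing file's format.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
```

Until such a file (and the rest of the kubeadm bootstrap state) exists, the repeated restart/failure cycle seen below is the expected behavior.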
Nov 12 21:32:40.272369 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 21:32:40.335369 kubelet[1633]: E1112 21:32:40.335198 1633 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 21:32:40.344745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 21:32:40.344944 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 21:32:46.058450 systemd[1]: Started sshd@1-188.245.86.234:22-35.240.185.59:59394.service - OpenSSH per-connection server daemon (35.240.185.59:59394).
Nov 12 21:32:46.853123 sshd[1642]: Invalid user user from 35.240.185.59 port 59394
Nov 12 21:32:47.014819 sshd[1642]: Connection closed by invalid user user 35.240.185.59 port 59394 [preauth]
Nov 12 21:32:47.018678 systemd[1]: sshd@1-188.245.86.234:22-35.240.185.59:59394.service: Deactivated successfully.
Nov 12 21:32:50.595290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 21:32:50.603411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 21:32:50.783782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 21:32:50.794433 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 21:32:50.847094 kubelet[1654]: E1112 21:32:50.846968 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 21:32:50.851593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 21:32:50.851785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 21:32:59.044575 systemd-timesyncd[1407]: Contacted time server 78.46.102.180:123 (2.flatcar.pool.ntp.org).
Nov 12 21:32:59.044649 systemd-timesyncd[1407]: Initial clock synchronization to Tue 2024-11-12 21:32:59.246624 UTC.
Nov 12 21:32:59.539396 systemd[1]: Started sshd@2-188.245.86.234:22-35.240.185.59:34050.service - OpenSSH per-connection server daemon (35.240.185.59:34050).
Nov 12 21:33:00.401651 sshd[1663]: Connection closed by authenticating user root 35.240.185.59 port 34050 [preauth]
Nov 12 21:33:00.407165 systemd[1]: sshd@2-188.245.86.234:22-35.240.185.59:34050.service: Deactivated successfully.
Nov 12 21:33:01.102396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 12 21:33:01.110339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 21:33:01.280879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 21:33:01.298571 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:33:01.350660 kubelet[1675]: E1112 21:33:01.350589 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:33:01.355688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:33:01.355979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:33:11.467223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 12 21:33:11.475396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:33:11.649363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:33:11.661647 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:33:11.706713 kubelet[1691]: E1112 21:33:11.706633 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:33:11.711127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:33:11.711349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:33:12.324743 update_engine[1495]: I20241112 21:33:12.324608 1495 update_attempter.cc:509] Updating boot flags... 
Nov 12 21:33:12.395151 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1708) Nov 12 21:33:12.467119 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1708) Nov 12 21:33:12.802505 systemd[1]: Started sshd@3-188.245.86.234:22-35.240.185.59:35216.service - OpenSSH per-connection server daemon (35.240.185.59:35216). Nov 12 21:33:13.666810 sshd[1718]: Invalid user uftp from 35.240.185.59 port 35216 Nov 12 21:33:13.851999 sshd[1718]: Connection closed by invalid user uftp 35.240.185.59 port 35216 [preauth] Nov 12 21:33:13.855060 systemd[1]: sshd@3-188.245.86.234:22-35.240.185.59:35216.service: Deactivated successfully. Nov 12 21:33:21.716908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 12 21:33:21.723260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:33:21.887395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:33:21.892349 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:33:21.936208 kubelet[1730]: E1112 21:33:21.936139 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:33:21.940818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:33:21.941210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:33:26.113555 systemd[1]: Started sshd@4-188.245.86.234:22-35.240.185.59:33948.service - OpenSSH per-connection server daemon (35.240.185.59:33948). 
Nov 12 21:33:26.802889 sshd[1740]: Invalid user data from 35.240.185.59 port 33948 Nov 12 21:33:26.967226 sshd[1740]: Connection closed by invalid user data 35.240.185.59 port 33948 [preauth] Nov 12 21:33:26.971556 systemd[1]: sshd@4-188.245.86.234:22-35.240.185.59:33948.service: Deactivated successfully. Nov 12 21:33:31.967278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 12 21:33:31.977446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:33:32.187537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:33:32.200376 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:33:32.243842 kubelet[1752]: E1112 21:33:32.243689 1752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:33:32.251318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:33:32.251543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:33:39.250693 systemd[1]: Started sshd@5-188.245.86.234:22-35.240.185.59:54236.service - OpenSSH per-connection server daemon (35.240.185.59:54236). Nov 12 21:33:40.135695 sshd[1761]: Invalid user bigdata from 35.240.185.59 port 54236 Nov 12 21:33:40.299491 sshd[1761]: Connection closed by invalid user bigdata 35.240.185.59 port 54236 [preauth] Nov 12 21:33:40.303179 systemd[1]: sshd@5-188.245.86.234:22-35.240.185.59:54236.service: Deactivated successfully. Nov 12 21:33:42.466803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Nov 12 21:33:42.473660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:33:42.665549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:33:42.666031 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:33:42.716083 kubelet[1773]: E1112 21:33:42.715901 1773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:33:42.720403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:33:42.720612 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:33:52.546516 systemd[1]: Started sshd@6-188.245.86.234:22-35.240.185.59:34026.service - OpenSSH per-connection server daemon (35.240.185.59:34026). Nov 12 21:33:52.966601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Nov 12 21:33:52.972248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:33:53.122050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 21:33:53.127939 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:33:53.173414 kubelet[1793]: E1112 21:33:53.173352 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:33:53.177600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:33:53.177788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:33:53.218174 sshd[1783]: Invalid user oracle from 35.240.185.59 port 34026 Nov 12 21:33:53.383352 sshd[1783]: Connection closed by invalid user oracle 35.240.185.59 port 34026 [preauth] Nov 12 21:33:53.386278 systemd[1]: sshd@6-188.245.86.234:22-35.240.185.59:34026.service: Deactivated successfully. Nov 12 21:34:03.216796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Nov 12 21:34:03.222572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:03.392342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 21:34:03.393507 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:34:03.437700 kubelet[1811]: E1112 21:34:03.437651 1811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:34:03.442725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:34:03.442921 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:34:05.668688 systemd[1]: Started sshd@7-188.245.86.234:22-35.240.185.59:35190.service - OpenSSH per-connection server daemon (35.240.185.59:35190). Nov 12 21:34:06.633306 sshd[1820]: Invalid user plex from 35.240.185.59 port 35190 Nov 12 21:34:06.835916 sshd[1820]: Connection closed by invalid user plex 35.240.185.59 port 35190 [preauth] Nov 12 21:34:06.838909 systemd[1]: sshd@7-188.245.86.234:22-35.240.185.59:35190.service: Deactivated successfully. Nov 12 21:34:13.467538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Nov 12 21:34:13.476449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:13.686190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 21:34:13.691140 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:34:13.733856 kubelet[1832]: E1112 21:34:13.733730 1832 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:34:13.738484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:34:13.738714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:34:19.132952 systemd[1]: Started sshd@8-188.245.86.234:22-35.240.185.59:33760.service - OpenSSH per-connection server daemon (35.240.185.59:33760). Nov 12 21:34:19.836569 sshd[1841]: Invalid user steam from 35.240.185.59 port 33760 Nov 12 21:34:19.997553 sshd[1841]: Connection closed by invalid user steam 35.240.185.59 port 33760 [preauth] Nov 12 21:34:19.999488 systemd[1]: sshd@8-188.245.86.234:22-35.240.185.59:33760.service: Deactivated successfully. Nov 12 21:34:23.967308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Nov 12 21:34:23.974472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:24.177997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 21:34:24.182732 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:34:24.225807 kubelet[1853]: E1112 21:34:24.225667 1853 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:34:24.230745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:34:24.230941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:34:30.914453 systemd[1]: Started sshd@9-188.245.86.234:22-147.75.109.163:59458.service - OpenSSH per-connection server daemon (147.75.109.163:59458). Nov 12 21:34:31.892596 sshd[1862]: Accepted publickey for core from 147.75.109.163 port 59458 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:34:31.896744 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:34:31.908914 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 21:34:31.918551 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 21:34:31.921124 systemd-logind[1492]: New session 1 of user core. Nov 12 21:34:31.933842 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 21:34:31.941648 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 21:34:31.952057 (systemd)[1866]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 21:34:32.092354 systemd[1866]: Queued start job for default target default.target. Nov 12 21:34:32.104413 systemd[1866]: Created slice app.slice - User Application Slice. 
Nov 12 21:34:32.104439 systemd[1866]: Reached target paths.target - Paths. Nov 12 21:34:32.104452 systemd[1866]: Reached target timers.target - Timers. Nov 12 21:34:32.106140 systemd[1866]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 21:34:32.120440 systemd[1866]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 21:34:32.120568 systemd[1866]: Reached target sockets.target - Sockets. Nov 12 21:34:32.120584 systemd[1866]: Reached target basic.target - Basic System. Nov 12 21:34:32.120628 systemd[1866]: Reached target default.target - Main User Target. Nov 12 21:34:32.120661 systemd[1866]: Startup finished in 159ms. Nov 12 21:34:32.121108 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 21:34:32.127258 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 21:34:32.209584 systemd[1]: Started sshd@10-188.245.86.234:22-35.240.185.59:36320.service - OpenSSH per-connection server daemon (35.240.185.59:36320). Nov 12 21:34:32.822140 systemd[1]: Started sshd@11-188.245.86.234:22-147.75.109.163:59466.service - OpenSSH per-connection server daemon (147.75.109.163:59466). Nov 12 21:34:33.138890 sshd[1876]: Invalid user esuser from 35.240.185.59 port 36320 Nov 12 21:34:33.321673 sshd[1876]: Connection closed by invalid user esuser 35.240.185.59 port 36320 [preauth] Nov 12 21:34:33.325709 systemd[1]: sshd@10-188.245.86.234:22-35.240.185.59:36320.service: Deactivated successfully. Nov 12 21:34:33.812141 sshd[1880]: Accepted publickey for core from 147.75.109.163 port 59466 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:34:33.813850 sshd[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:34:33.818207 systemd-logind[1492]: New session 2 of user core. Nov 12 21:34:33.828241 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 21:34:34.339006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. 
Nov 12 21:34:34.353183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:34.499900 sshd[1880]: pam_unix(sshd:session): session closed for user core Nov 12 21:34:34.503930 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. Nov 12 21:34:34.505724 systemd[1]: sshd@11-188.245.86.234:22-147.75.109.163:59466.service: Deactivated successfully. Nov 12 21:34:34.516596 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 21:34:34.521443 systemd-logind[1492]: Removed session 2. Nov 12 21:34:34.542922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:34:34.548428 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:34:34.600017 kubelet[1896]: E1112 21:34:34.599865 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:34:34.605134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:34:34.605395 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:34:34.680496 systemd[1]: Started sshd@12-188.245.86.234:22-147.75.109.163:59476.service - OpenSSH per-connection server daemon (147.75.109.163:59476). Nov 12 21:34:35.684454 sshd[1906]: Accepted publickey for core from 147.75.109.163 port 59476 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:34:35.686561 sshd[1906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:34:35.694013 systemd-logind[1492]: New session 3 of user core. Nov 12 21:34:35.700269 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 12 21:34:36.362380 sshd[1906]: pam_unix(sshd:session): session closed for user core Nov 12 21:34:36.367009 systemd[1]: sshd@12-188.245.86.234:22-147.75.109.163:59476.service: Deactivated successfully. Nov 12 21:34:36.369518 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 21:34:36.370406 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. Nov 12 21:34:36.371498 systemd-logind[1492]: Removed session 3. Nov 12 21:34:36.543525 systemd[1]: Started sshd@13-188.245.86.234:22-147.75.109.163:59492.service - OpenSSH per-connection server daemon (147.75.109.163:59492). Nov 12 21:34:37.528818 sshd[1913]: Accepted publickey for core from 147.75.109.163 port 59492 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:34:37.530671 sshd[1913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:34:37.534922 systemd-logind[1492]: New session 4 of user core. Nov 12 21:34:37.551365 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 21:34:38.222424 sshd[1913]: pam_unix(sshd:session): session closed for user core Nov 12 21:34:38.227506 systemd[1]: sshd@13-188.245.86.234:22-147.75.109.163:59492.service: Deactivated successfully. Nov 12 21:34:38.231801 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 21:34:38.236128 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit. Nov 12 21:34:38.237820 systemd-logind[1492]: Removed session 4. Nov 12 21:34:38.398610 systemd[1]: Started sshd@14-188.245.86.234:22-147.75.109.163:59496.service - OpenSSH per-connection server daemon (147.75.109.163:59496). Nov 12 21:34:39.400599 sshd[1920]: Accepted publickey for core from 147.75.109.163 port 59496 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:34:39.403352 sshd[1920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:34:39.411800 systemd-logind[1492]: New session 5 of user core. 
Nov 12 21:34:39.421469 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 21:34:39.931113 sudo[1923]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 21:34:39.931448 sudo[1923]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 21:34:39.949996 sudo[1923]: pam_unix(sudo:session): session closed for user root Nov 12 21:34:40.110107 sshd[1920]: pam_unix(sshd:session): session closed for user core Nov 12 21:34:40.117863 systemd[1]: sshd@14-188.245.86.234:22-147.75.109.163:59496.service: Deactivated successfully. Nov 12 21:34:40.120579 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 21:34:40.121799 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit. Nov 12 21:34:40.123280 systemd-logind[1492]: Removed session 5. Nov 12 21:34:40.286392 systemd[1]: Started sshd@15-188.245.86.234:22-147.75.109.163:40104.service - OpenSSH per-connection server daemon (147.75.109.163:40104). Nov 12 21:34:41.266161 sshd[1928]: Accepted publickey for core from 147.75.109.163 port 40104 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:34:41.268240 sshd[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:34:41.273153 systemd-logind[1492]: New session 6 of user core. Nov 12 21:34:41.285273 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 12 21:34:41.789252 sudo[1932]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 21:34:41.789681 sudo[1932]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 21:34:41.796168 sudo[1932]: pam_unix(sudo:session): session closed for user root Nov 12 21:34:41.808428 sudo[1931]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 21:34:41.809214 sudo[1931]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 21:34:41.835355 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 21:34:41.839892 auditctl[1935]: No rules Nov 12 21:34:41.840341 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 21:34:41.840557 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 21:34:41.849785 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 21:34:41.875653 augenrules[1953]: No rules Nov 12 21:34:41.876589 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 21:34:41.878514 sudo[1931]: pam_unix(sudo:session): session closed for user root Nov 12 21:34:42.038356 sshd[1928]: pam_unix(sshd:session): session closed for user core Nov 12 21:34:42.042785 systemd[1]: sshd@15-188.245.86.234:22-147.75.109.163:40104.service: Deactivated successfully. Nov 12 21:34:42.045312 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 21:34:42.046150 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. Nov 12 21:34:42.047339 systemd-logind[1492]: Removed session 6. Nov 12 21:34:42.219418 systemd[1]: Started sshd@16-188.245.86.234:22-147.75.109.163:40106.service - OpenSSH per-connection server daemon (147.75.109.163:40106). 
Nov 12 21:34:43.222474 sshd[1961]: Accepted publickey for core from 147.75.109.163 port 40106 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:34:43.224836 sshd[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:34:43.233553 systemd-logind[1492]: New session 7 of user core. Nov 12 21:34:43.243301 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 21:34:43.750221 sudo[1964]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 21:34:43.750605 sudo[1964]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 21:34:44.079286 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 21:34:44.088553 (dockerd)[1980]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 21:34:44.414824 dockerd[1980]: time="2024-11-12T21:34:44.414666339Z" level=info msg="Starting up" Nov 12 21:34:44.535876 systemd[1]: var-lib-docker-metacopy\x2dcheck2617885755-merged.mount: Deactivated successfully. Nov 12 21:34:44.557941 dockerd[1980]: time="2024-11-12T21:34:44.557886590Z" level=info msg="Loading containers: start." Nov 12 21:34:44.609911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Nov 12 21:34:44.617887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:44.728131 kernel: Initializing XFRM netlink socket Nov 12 21:34:44.816534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 21:34:44.819494 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:34:44.833550 systemd-networkd[1406]: docker0: Link UP Nov 12 21:34:44.850563 dockerd[1980]: time="2024-11-12T21:34:44.850312200Z" level=info msg="Loading containers: done." Nov 12 21:34:44.863779 kubelet[2073]: E1112 21:34:44.863728 2073 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:34:44.869924 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3201890849-merged.mount: Deactivated successfully. Nov 12 21:34:44.870852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:34:44.871224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 21:34:44.876825 dockerd[1980]: time="2024-11-12T21:34:44.876783101Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 21:34:44.876916 dockerd[1980]: time="2024-11-12T21:34:44.876905679Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 21:34:44.877059 dockerd[1980]: time="2024-11-12T21:34:44.877039317Z" level=info msg="Daemon has completed initialization" Nov 12 21:34:44.909637 dockerd[1980]: time="2024-11-12T21:34:44.909573292Z" level=info msg="API listen on /run/docker.sock" Nov 12 21:34:44.910014 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 12 21:34:45.714943 systemd[1]: Started sshd@17-188.245.86.234:22-35.240.185.59:52280.service - OpenSSH per-connection server daemon (35.240.185.59:52280). Nov 12 21:34:46.101978 containerd[1513]: time="2024-11-12T21:34:46.101829933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 12 21:34:46.401372 sshd[2138]: Invalid user observer from 35.240.185.59 port 52280 Nov 12 21:34:46.563018 sshd[2138]: Connection closed by invalid user observer 35.240.185.59 port 52280 [preauth] Nov 12 21:34:46.566450 systemd[1]: sshd@17-188.245.86.234:22-35.240.185.59:52280.service: Deactivated successfully. Nov 12 21:34:46.729301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897124132.mount: Deactivated successfully. Nov 12 21:34:47.857293 containerd[1513]: time="2024-11-12T21:34:47.857161461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:47.858608 containerd[1513]: time="2024-11-12T21:34:47.858553628Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676535" Nov 12 21:34:47.861513 containerd[1513]: time="2024-11-12T21:34:47.861447145Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:47.865244 containerd[1513]: time="2024-11-12T21:34:47.865174681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:47.866983 containerd[1513]: time="2024-11-12T21:34:47.866640394Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 1.76476632s" 
Nov 12 21:34:47.866983 containerd[1513]: time="2024-11-12T21:34:47.866682262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\"" Nov 12 21:34:47.898131 containerd[1513]: time="2024-11-12T21:34:47.898095802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 12 21:34:49.280919 containerd[1513]: time="2024-11-12T21:34:49.280830183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:49.282218 containerd[1513]: time="2024-11-12T21:34:49.282159498Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605816" Nov 12 21:34:49.282741 containerd[1513]: time="2024-11-12T21:34:49.282695566Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:49.286055 containerd[1513]: time="2024-11-12T21:34:49.285983996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:49.287597 containerd[1513]: time="2024-11-12T21:34:49.287195842Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 1.388897653s" 
Nov 12 21:34:49.287597 containerd[1513]: time="2024-11-12T21:34:49.287235316Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\"" Nov 12 21:34:49.311479 containerd[1513]: time="2024-11-12T21:34:49.311434587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 12 21:34:50.322811 containerd[1513]: time="2024-11-12T21:34:50.322696543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:50.324177 containerd[1513]: time="2024-11-12T21:34:50.324124053Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784264" Nov 12 21:34:50.325027 containerd[1513]: time="2024-11-12T21:34:50.324958697Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:50.328454 containerd[1513]: time="2024-11-12T21:34:50.328385400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:50.329962 containerd[1513]: time="2024-11-12T21:34:50.329759269Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 1.018284949s" Nov 12 21:34:50.329962 containerd[1513]: time="2024-11-12T21:34:50.329815905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\"" 
Nov 12 21:34:50.360056 containerd[1513]: time="2024-11-12T21:34:50.360014638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 12 21:34:51.398746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111636615.mount: Deactivated successfully. Nov 12 21:34:51.752642 containerd[1513]: time="2024-11-12T21:34:51.752550797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:51.753918 containerd[1513]: time="2024-11-12T21:34:51.753870898Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054650" Nov 12 21:34:51.755359 containerd[1513]: time="2024-11-12T21:34:51.755285525Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:51.758108 containerd[1513]: time="2024-11-12T21:34:51.758029571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:51.759098 containerd[1513]: time="2024-11-12T21:34:51.758535023Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 1.398298071s" Nov 12 21:34:51.759098 containerd[1513]: time="2024-11-12T21:34:51.758563265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\"" Nov 12 21:34:51.784561 containerd[1513]: time="2024-11-12T21:34:51.784517932Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" 
Nov 12 21:34:52.319738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410948436.mount: Deactivated successfully. Nov 12 21:34:53.127984 containerd[1513]: time="2024-11-12T21:34:53.127894734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:53.129155 containerd[1513]: time="2024-11-12T21:34:53.129116896Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Nov 12 21:34:53.130152 containerd[1513]: time="2024-11-12T21:34:53.130109870Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:53.132879 containerd[1513]: time="2024-11-12T21:34:53.132844556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:53.134096 containerd[1513]: time="2024-11-12T21:34:53.134016173Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.34944882s" Nov 12 21:34:53.134096 containerd[1513]: time="2024-11-12T21:34:53.134051348Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 21:34:53.155421 containerd[1513]: 
time="2024-11-12T21:34:53.155187440Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 21:34:53.682200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572261809.mount: Deactivated successfully. Nov 12 21:34:53.688188 containerd[1513]: time="2024-11-12T21:34:53.688121333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:53.689143 containerd[1513]: time="2024-11-12T21:34:53.689112535Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Nov 12 21:34:53.691408 containerd[1513]: time="2024-11-12T21:34:53.691305057Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:53.696061 containerd[1513]: time="2024-11-12T21:34:53.695996627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:53.698003 containerd[1513]: time="2024-11-12T21:34:53.697688506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 542.450271ms" Nov 12 21:34:53.698003 containerd[1513]: time="2024-11-12T21:34:53.697745452Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 21:34:53.739419 containerd[1513]: time="2024-11-12T21:34:53.739261873Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 12 21:34:54.309224 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1978194306.mount: Deactivated successfully. Nov 12 21:34:54.967090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Nov 12 21:34:54.978178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:55.192168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:34:55.199488 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 21:34:55.296013 kubelet[2346]: E1112 21:34:55.295824 2346 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 21:34:55.300036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 21:34:55.300268 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 21:34:55.922614 containerd[1513]: time="2024-11-12T21:34:55.922548594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:55.923978 containerd[1513]: time="2024-11-12T21:34:55.923922452Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Nov 12 21:34:55.924749 containerd[1513]: time="2024-11-12T21:34:55.924706057Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:55.927952 containerd[1513]: time="2024-11-12T21:34:55.927905138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:34:55.929347 containerd[1513]: time="2024-11-12T21:34:55.929141950Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.189817581s" Nov 12 21:34:55.929347 containerd[1513]: time="2024-11-12T21:34:55.929230115Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Nov 12 21:34:58.712676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:34:58.720674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:58.750272 systemd[1]: Reloading requested from client PID 2421 ('systemctl') (unit session-7.scope)... Nov 12 21:34:58.750301 systemd[1]: Reloading... 
Nov 12 21:34:58.941140 zram_generator::config[2464]: No configuration found. Nov 12 21:34:59.081250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 21:34:59.170671 systemd[1]: Reloading finished in 419 ms. Nov 12 21:34:59.242745 systemd[1]: Started sshd@18-188.245.86.234:22-35.240.185.59:39204.service - OpenSSH per-connection server daemon (35.240.185.59:39204). Nov 12 21:34:59.244610 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 21:34:59.244736 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 21:34:59.245519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:34:59.250674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:34:59.411249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:34:59.416603 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 21:34:59.459710 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 21:34:59.459710 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 21:34:59.459710 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 21:34:59.460154 kubelet[2519]: I1112 21:34:59.459768 2519 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 21:34:59.749426 kubelet[2519]: I1112 21:34:59.749375 2519 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 21:34:59.749426 kubelet[2519]: I1112 21:34:59.749401 2519 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 21:34:59.749627 kubelet[2519]: I1112 21:34:59.749589 2519 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 21:34:59.778416 kubelet[2519]: I1112 21:34:59.778123 2519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 21:34:59.779503 kubelet[2519]: E1112 21:34:59.779326 2519 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.86.234:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.790820 kubelet[2519]: I1112 21:34:59.790769 2519 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 21:34:59.794265 kubelet[2519]: I1112 21:34:59.794197 2519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 21:34:59.794493 kubelet[2519]: I1112 21:34:59.794253 2519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-0-6-01c097edc7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 21:34:59.794571 kubelet[2519]: I1112 21:34:59.794500 2519 topology_manager.go:138] "Creating topology manager with none policy" Nov 
12 21:34:59.794571 kubelet[2519]: I1112 21:34:59.794514 2519 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 21:34:59.794725 kubelet[2519]: I1112 21:34:59.794694 2519 state_mem.go:36] "Initialized new in-memory state store" Nov 12 21:34:59.796126 kubelet[2519]: W1112 21:34:59.796017 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.86.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-6-01c097edc7&limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.796126 kubelet[2519]: E1112 21:34:59.796093 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.86.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-6-01c097edc7&limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.797443 kubelet[2519]: I1112 21:34:59.797406 2519 kubelet.go:400] "Attempting to sync node with API server" Nov 12 21:34:59.797498 kubelet[2519]: I1112 21:34:59.797452 2519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 21:34:59.797498 kubelet[2519]: I1112 21:34:59.797495 2519 kubelet.go:312] "Adding apiserver pod source" Nov 12 21:34:59.797545 kubelet[2519]: I1112 21:34:59.797517 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 21:34:59.801233 kubelet[2519]: W1112 21:34:59.800943 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.86.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.801233 kubelet[2519]: E1112 21:34:59.801006 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://188.245.86.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.801443 kubelet[2519]: I1112 21:34:59.801410 2519 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 21:34:59.803607 kubelet[2519]: I1112 21:34:59.803113 2519 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 21:34:59.803607 kubelet[2519]: W1112 21:34:59.803174 2519 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 21:34:59.804826 kubelet[2519]: I1112 21:34:59.803734 2519 server.go:1264] "Started kubelet" Nov 12 21:34:59.807833 kubelet[2519]: I1112 21:34:59.807211 2519 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 21:34:59.809025 kubelet[2519]: I1112 21:34:59.808207 2519 server.go:455] "Adding debug handlers to kubelet server" Nov 12 21:34:59.811353 kubelet[2519]: I1112 21:34:59.810879 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 21:34:59.811662 kubelet[2519]: I1112 21:34:59.811592 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 21:34:59.811897 kubelet[2519]: I1112 21:34:59.811862 2519 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 21:34:59.812730 kubelet[2519]: E1112 21:34:59.812003 2519 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.86.234:6443/api/v1/namespaces/default/events\": dial tcp 188.245.86.234:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-0-6-01c097edc7.18075625441c1695 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-0-6-01c097edc7,UID:ci-4081-2-0-6-01c097edc7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-0-6-01c097edc7,},FirstTimestamp:2024-11-12 21:34:59.803715221 +0000 UTC m=+0.383454569,LastTimestamp:2024-11-12 21:34:59.803715221 +0000 UTC m=+0.383454569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-0-6-01c097edc7,}" Nov 12 21:34:59.818263 kubelet[2519]: I1112 21:34:59.818237 2519 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 21:34:59.821239 kubelet[2519]: I1112 21:34:59.821217 2519 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 21:34:59.822683 kubelet[2519]: I1112 21:34:59.822226 2519 reconciler.go:26] "Reconciler: start to sync state" Nov 12 21:34:59.823356 kubelet[2519]: E1112 21:34:59.823327 2519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.86.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-6-01c097edc7?timeout=10s\": dial tcp 188.245.86.234:6443: connect: connection refused" interval="200ms" Nov 12 21:34:59.823500 kubelet[2519]: W1112 21:34:59.823467 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.86.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.823578 kubelet[2519]: E1112 21:34:59.823565 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.86.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.827733 kubelet[2519]: I1112 
21:34:59.826759 2519 factory.go:221] Registration of the systemd container factory successfully Nov 12 21:34:59.827733 kubelet[2519]: I1112 21:34:59.826900 2519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 21:34:59.830940 kubelet[2519]: I1112 21:34:59.830855 2519 factory.go:221] Registration of the containerd container factory successfully Nov 12 21:34:59.849125 kubelet[2519]: I1112 21:34:59.847744 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 21:34:59.849526 kubelet[2519]: I1112 21:34:59.849493 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 21:34:59.849579 kubelet[2519]: I1112 21:34:59.849531 2519 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 21:34:59.849579 kubelet[2519]: I1112 21:34:59.849556 2519 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 21:34:59.849649 kubelet[2519]: E1112 21:34:59.849601 2519 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 21:34:59.849866 kubelet[2519]: E1112 21:34:59.849842 2519 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 21:34:59.859183 kubelet[2519]: W1112 21:34:59.859125 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.86.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.859183 kubelet[2519]: E1112 21:34:59.859179 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.86.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:34:59.864905 kubelet[2519]: I1112 21:34:59.864874 2519 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 21:34:59.865054 kubelet[2519]: I1112 21:34:59.865040 2519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 21:34:59.865171 kubelet[2519]: I1112 21:34:59.865160 2519 state_mem.go:36] "Initialized new in-memory state store" Nov 12 21:34:59.868093 kubelet[2519]: I1112 21:34:59.868046 2519 policy_none.go:49] "None policy: Start" Nov 12 21:34:59.869046 kubelet[2519]: I1112 21:34:59.869029 2519 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 21:34:59.869488 kubelet[2519]: I1112 21:34:59.869155 2519 state_mem.go:35] "Initializing new in-memory state store" Nov 12 21:34:59.875353 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 21:34:59.890563 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 21:34:59.902649 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 12 21:34:59.904630 kubelet[2519]: I1112 21:34:59.904586 2519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 21:34:59.905374 kubelet[2519]: I1112 21:34:59.904852 2519 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 21:34:59.905374 kubelet[2519]: I1112 21:34:59.905027 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 21:34:59.907708 kubelet[2519]: E1112 21:34:59.907672 2519 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-0-6-01c097edc7\" not found" Nov 12 21:34:59.921318 kubelet[2519]: I1112 21:34:59.921267 2519 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:34:59.921843 kubelet[2519]: E1112 21:34:59.921795 2519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.86.234:6443/api/v1/nodes\": dial tcp 188.245.86.234:6443: connect: connection refused" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:34:59.951180 kubelet[2519]: I1112 21:34:59.950981 2519 topology_manager.go:215] "Topology Admit Handler" podUID="862df74c55eda4f90819a8743842ffba" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:34:59.953850 kubelet[2519]: I1112 21:34:59.953652 2519 topology_manager.go:215] "Topology Admit Handler" podUID="4a7a2cde9ce81311a03d2b73b604c745" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:34:59.955591 kubelet[2519]: I1112 21:34:59.955440 2519 topology_manager.go:215] "Topology Admit Handler" podUID="7ba04f15b0744f56b39b516d560b8928" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-0-6-01c097edc7" Nov 12 21:34:59.964279 systemd[1]: Created slice kubepods-burstable-pod862df74c55eda4f90819a8743842ffba.slice - libcontainer container 
kubepods-burstable-pod862df74c55eda4f90819a8743842ffba.slice. Nov 12 21:34:59.990517 systemd[1]: Created slice kubepods-burstable-pod7ba04f15b0744f56b39b516d560b8928.slice - libcontainer container kubepods-burstable-pod7ba04f15b0744f56b39b516d560b8928.slice. Nov 12 21:34:59.995492 systemd[1]: Created slice kubepods-burstable-pod4a7a2cde9ce81311a03d2b73b604c745.slice - libcontainer container kubepods-burstable-pod4a7a2cde9ce81311a03d2b73b604c745.slice. Nov 12 21:35:00.023346 kubelet[2519]: I1112 21:35:00.023201 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/862df74c55eda4f90819a8743842ffba-ca-certs\") pod \"kube-apiserver-ci-4081-2-0-6-01c097edc7\" (UID: \"862df74c55eda4f90819a8743842ffba\") " pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.024419 kubelet[2519]: E1112 21:35:00.024382 2519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.86.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-6-01c097edc7?timeout=10s\": dial tcp 188.245.86.234:6443: connect: connection refused" interval="400ms" Nov 12 21:35:00.102899 sshd[2508]: Connection closed by authenticating user docker 35.240.185.59 port 39204 [preauth] Nov 12 21:35:00.104956 systemd[1]: sshd@18-188.245.86.234:22-35.240.185.59:39204.service: Deactivated successfully. 
Nov 12 21:35:00.123429 kubelet[2519]: I1112 21:35:00.123370 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-ca-certs\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.123429 kubelet[2519]: I1112 21:35:00.123408 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.123429 kubelet[2519]: I1112 21:35:00.123461 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/862df74c55eda4f90819a8743842ffba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-0-6-01c097edc7\" (UID: \"862df74c55eda4f90819a8743842ffba\") " pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.123429 kubelet[2519]: I1112 21:35:00.123477 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.123429 kubelet[2519]: I1112 21:35:00.123491 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.124143 kubelet[2519]: I1112 21:35:00.123505 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ba04f15b0744f56b39b516d560b8928-kubeconfig\") pod \"kube-scheduler-ci-4081-2-0-6-01c097edc7\" (UID: \"7ba04f15b0744f56b39b516d560b8928\") " pod="kube-system/kube-scheduler-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.124143 kubelet[2519]: I1112 21:35:00.123520 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/862df74c55eda4f90819a8743842ffba-k8s-certs\") pod \"kube-apiserver-ci-4081-2-0-6-01c097edc7\" (UID: \"862df74c55eda4f90819a8743842ffba\") " pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.124143 kubelet[2519]: I1112 21:35:00.123535 2519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.124143 kubelet[2519]: I1112 21:35:00.123967 2519 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.124365 kubelet[2519]: E1112 21:35:00.124344 2519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.86.234:6443/api/v1/nodes\": dial tcp 188.245.86.234:6443: connect: connection refused" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.286651 containerd[1513]: 
time="2024-11-12T21:35:00.286499405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-0-6-01c097edc7,Uid:862df74c55eda4f90819a8743842ffba,Namespace:kube-system,Attempt:0,}" Nov 12 21:35:00.297047 containerd[1513]: time="2024-11-12T21:35:00.296990881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-0-6-01c097edc7,Uid:7ba04f15b0744f56b39b516d560b8928,Namespace:kube-system,Attempt:0,}" Nov 12 21:35:00.299109 containerd[1513]: time="2024-11-12T21:35:00.298897888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-0-6-01c097edc7,Uid:4a7a2cde9ce81311a03d2b73b604c745,Namespace:kube-system,Attempt:0,}" Nov 12 21:35:00.426023 kubelet[2519]: E1112 21:35:00.425916 2519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.86.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-6-01c097edc7?timeout=10s\": dial tcp 188.245.86.234:6443: connect: connection refused" interval="800ms" Nov 12 21:35:00.527018 kubelet[2519]: I1112 21:35:00.526945 2519 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.527786 kubelet[2519]: E1112 21:35:00.527265 2519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.86.234:6443/api/v1/nodes\": dial tcp 188.245.86.234:6443: connect: connection refused" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:00.787621 kubelet[2519]: W1112 21:35:00.787542 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.86.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:00.787621 kubelet[2519]: E1112 21:35:00.787621 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://188.245.86.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:00.817296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984945454.mount: Deactivated successfully. Nov 12 21:35:00.827445 containerd[1513]: time="2024-11-12T21:35:00.826825129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 21:35:00.828408 containerd[1513]: time="2024-11-12T21:35:00.828287061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Nov 12 21:35:00.829569 containerd[1513]: time="2024-11-12T21:35:00.829514082Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 21:35:00.832354 containerd[1513]: time="2024-11-12T21:35:00.832274439Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 21:35:00.833524 containerd[1513]: time="2024-11-12T21:35:00.833470222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 21:35:00.834512 containerd[1513]: time="2024-11-12T21:35:00.834327469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 21:35:00.837752 containerd[1513]: time="2024-11-12T21:35:00.836541942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 21:35:00.837752 
containerd[1513]: time="2024-11-12T21:35:00.836648913Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 21:35:00.837907 containerd[1513]: time="2024-11-12T21:35:00.837748244Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.141699ms" Nov 12 21:35:00.841909 containerd[1513]: time="2024-11-12T21:35:00.841837544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.724994ms" Nov 12 21:35:00.842612 containerd[1513]: time="2024-11-12T21:35:00.842574025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.597439ms" Nov 12 21:35:00.947060 kubelet[2519]: W1112 21:35:00.946554 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.86.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:00.947060 kubelet[2519]: E1112 21:35:00.946604 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://188.245.86.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:00.993963 containerd[1513]: time="2024-11-12T21:35:00.993809845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:00.994238 containerd[1513]: time="2024-11-12T21:35:00.994104257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:00.994238 containerd[1513]: time="2024-11-12T21:35:00.994132710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:00.994414 containerd[1513]: time="2024-11-12T21:35:00.994370636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:00.996305 containerd[1513]: time="2024-11-12T21:35:00.996241084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:00.996305 containerd[1513]: time="2024-11-12T21:35:00.996287080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:00.996538 containerd[1513]: time="2024-11-12T21:35:00.996431280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:00.996538 containerd[1513]: time="2024-11-12T21:35:00.996504628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:00.998217 containerd[1513]: time="2024-11-12T21:35:00.997437398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:00.998217 containerd[1513]: time="2024-11-12T21:35:00.997567151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:01.000392 containerd[1513]: time="2024-11-12T21:35:00.999929963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:01.000392 containerd[1513]: time="2024-11-12T21:35:01.000037835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:01.030420 systemd[1]: Started cri-containerd-8c5763051c5d164f4ccd19670372f37e3d486b573708e1390347206a48429f65.scope - libcontainer container 8c5763051c5d164f4ccd19670372f37e3d486b573708e1390347206a48429f65. Nov 12 21:35:01.036873 systemd[1]: Started cri-containerd-19c1d1afbe8143936eb753444b477ef9282d2bf529c2fcb2d08976409229d5fe.scope - libcontainer container 19c1d1afbe8143936eb753444b477ef9282d2bf529c2fcb2d08976409229d5fe. Nov 12 21:35:01.042451 systemd[1]: Started cri-containerd-f7bfcd15e9c3e7fa2e55b5aab7276c6390d8f9bfaff763e093afb60fb71016f5.scope - libcontainer container f7bfcd15e9c3e7fa2e55b5aab7276c6390d8f9bfaff763e093afb60fb71016f5. 
Nov 12 21:35:01.087859 kubelet[2519]: W1112 21:35:01.087737 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.86.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:01.087859 kubelet[2519]: E1112 21:35:01.087788 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.86.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:01.121722 containerd[1513]: time="2024-11-12T21:35:01.121250736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-0-6-01c097edc7,Uid:862df74c55eda4f90819a8743842ffba,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c1d1afbe8143936eb753444b477ef9282d2bf529c2fcb2d08976409229d5fe\"" Nov 12 21:35:01.133885 containerd[1513]: time="2024-11-12T21:35:01.133720615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-0-6-01c097edc7,Uid:7ba04f15b0744f56b39b516d560b8928,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c5763051c5d164f4ccd19670372f37e3d486b573708e1390347206a48429f65\"" Nov 12 21:35:01.139103 containerd[1513]: time="2024-11-12T21:35:01.138119260Z" level=info msg="CreateContainer within sandbox \"19c1d1afbe8143936eb753444b477ef9282d2bf529c2fcb2d08976409229d5fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 21:35:01.142064 containerd[1513]: time="2024-11-12T21:35:01.142027643Z" level=info msg="CreateContainer within sandbox \"8c5763051c5d164f4ccd19670372f37e3d486b573708e1390347206a48429f65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 21:35:01.148199 containerd[1513]: time="2024-11-12T21:35:01.148157697Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-0-6-01c097edc7,Uid:4a7a2cde9ce81311a03d2b73b604c745,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7bfcd15e9c3e7fa2e55b5aab7276c6390d8f9bfaff763e093afb60fb71016f5\"" Nov 12 21:35:01.152389 containerd[1513]: time="2024-11-12T21:35:01.152351445Z" level=info msg="CreateContainer within sandbox \"f7bfcd15e9c3e7fa2e55b5aab7276c6390d8f9bfaff763e093afb60fb71016f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 21:35:01.165336 containerd[1513]: time="2024-11-12T21:35:01.165288723Z" level=info msg="CreateContainer within sandbox \"8c5763051c5d164f4ccd19670372f37e3d486b573708e1390347206a48429f65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092\"" Nov 12 21:35:01.167103 containerd[1513]: time="2024-11-12T21:35:01.166280372Z" level=info msg="StartContainer for \"722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092\"" Nov 12 21:35:01.170051 containerd[1513]: time="2024-11-12T21:35:01.169692635Z" level=info msg="CreateContainer within sandbox \"19c1d1afbe8143936eb753444b477ef9282d2bf529c2fcb2d08976409229d5fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e86be04fd2a55b4e4a6a6f0696a493f8a93431f3f2e2dc15062e45cc6bce48a4\"" Nov 12 21:35:01.170398 containerd[1513]: time="2024-11-12T21:35:01.170379034Z" level=info msg="StartContainer for \"e86be04fd2a55b4e4a6a6f0696a493f8a93431f3f2e2dc15062e45cc6bce48a4\"" Nov 12 21:35:01.178684 containerd[1513]: time="2024-11-12T21:35:01.178647599Z" level=info msg="CreateContainer within sandbox \"f7bfcd15e9c3e7fa2e55b5aab7276c6390d8f9bfaff763e093afb60fb71016f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67\"" Nov 12 21:35:01.179301 containerd[1513]: time="2024-11-12T21:35:01.179275237Z" level=info 
msg="StartContainer for \"105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67\"" Nov 12 21:35:01.200784 systemd[1]: Started cri-containerd-722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092.scope - libcontainer container 722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092. Nov 12 21:35:01.208868 systemd[1]: Started cri-containerd-e86be04fd2a55b4e4a6a6f0696a493f8a93431f3f2e2dc15062e45cc6bce48a4.scope - libcontainer container e86be04fd2a55b4e4a6a6f0696a493f8a93431f3f2e2dc15062e45cc6bce48a4. Nov 12 21:35:01.226583 kubelet[2519]: E1112 21:35:01.226441 2519 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.86.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-6-01c097edc7?timeout=10s\": dial tcp 188.245.86.234:6443: connect: connection refused" interval="1.6s" Nov 12 21:35:01.229264 systemd[1]: Started cri-containerd-105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67.scope - libcontainer container 105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67. 
Nov 12 21:35:01.259829 kubelet[2519]: W1112 21:35:01.259769 2519 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.86.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-6-01c097edc7&limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:01.259999 kubelet[2519]: E1112 21:35:01.259978 2519 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.86.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-6-01c097edc7&limit=500&resourceVersion=0": dial tcp 188.245.86.234:6443: connect: connection refused Nov 12 21:35:01.272903 containerd[1513]: time="2024-11-12T21:35:01.272861709Z" level=info msg="StartContainer for \"e86be04fd2a55b4e4a6a6f0696a493f8a93431f3f2e2dc15062e45cc6bce48a4\" returns successfully" Nov 12 21:35:01.295288 containerd[1513]: time="2024-11-12T21:35:01.295096282Z" level=info msg="StartContainer for \"722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092\" returns successfully" Nov 12 21:35:01.303791 containerd[1513]: time="2024-11-12T21:35:01.303713171Z" level=info msg="StartContainer for \"105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67\" returns successfully" Nov 12 21:35:01.330273 kubelet[2519]: I1112 21:35:01.330239 2519 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:01.330651 kubelet[2519]: E1112 21:35:01.330622 2519 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.86.234:6443/api/v1/nodes\": dial tcp 188.245.86.234:6443: connect: connection refused" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:02.933436 kubelet[2519]: I1112 21:35:02.933388 2519 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:03.286896 kubelet[2519]: E1112 21:35:03.286851 2519 nodelease.go:49] "Failed 
to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-0-6-01c097edc7\" not found" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:03.379049 kubelet[2519]: I1112 21:35:03.378935 2519 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:03.396902 kubelet[2519]: E1112 21:35:03.396856 2519 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-0-6-01c097edc7\" not found" Nov 12 21:35:03.498528 kubelet[2519]: E1112 21:35:03.497633 2519 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-0-6-01c097edc7\" not found" Nov 12 21:35:03.598474 kubelet[2519]: E1112 21:35:03.598265 2519 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-0-6-01c097edc7\" not found" Nov 12 21:35:03.699351 kubelet[2519]: E1112 21:35:03.699272 2519 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-0-6-01c097edc7\" not found" Nov 12 21:35:03.799964 kubelet[2519]: E1112 21:35:03.799851 2519 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-0-6-01c097edc7\" not found" Nov 12 21:35:04.801165 kubelet[2519]: I1112 21:35:04.801120 2519 apiserver.go:52] "Watching apiserver" Nov 12 21:35:04.822573 kubelet[2519]: I1112 21:35:04.822524 2519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 21:35:05.357785 systemd[1]: Reloading requested from client PID 2794 ('systemctl') (unit session-7.scope)... Nov 12 21:35:05.357800 systemd[1]: Reloading... Nov 12 21:35:05.488229 zram_generator::config[2840]: No configuration found. Nov 12 21:35:05.612572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 12 21:35:05.716053 systemd[1]: Reloading finished in 357 ms. Nov 12 21:35:05.768195 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:35:05.779374 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 21:35:05.779728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:35:05.785434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 21:35:05.956457 (kubelet)[2885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 21:35:05.956564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 21:35:06.042158 kubelet[2885]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 21:35:06.042158 kubelet[2885]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 21:35:06.042158 kubelet[2885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 21:35:06.042680 kubelet[2885]: I1112 21:35:06.042190 2885 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 21:35:06.046936 kubelet[2885]: I1112 21:35:06.046905 2885 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 21:35:06.046936 kubelet[2885]: I1112 21:35:06.046925 2885 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 21:35:06.047153 kubelet[2885]: I1112 21:35:06.047132 2885 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 21:35:06.048377 kubelet[2885]: I1112 21:35:06.048354 2885 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 21:35:06.049858 kubelet[2885]: I1112 21:35:06.049753 2885 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 21:35:06.059351 kubelet[2885]: I1112 21:35:06.059319 2885 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 21:35:06.060122 kubelet[2885]: I1112 21:35:06.059782 2885 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 21:35:06.060122 kubelet[2885]: I1112 21:35:06.059817 2885 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-0-6-01c097edc7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 21:35:06.060122 kubelet[2885]: I1112 21:35:06.060130 2885 topology_manager.go:138] "Creating topology manager with none policy" Nov 
12 21:35:06.060366 kubelet[2885]: I1112 21:35:06.060146 2885 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 21:35:06.060366 kubelet[2885]: I1112 21:35:06.060202 2885 state_mem.go:36] "Initialized new in-memory state store" Nov 12 21:35:06.060366 kubelet[2885]: I1112 21:35:06.060315 2885 kubelet.go:400] "Attempting to sync node with API server" Nov 12 21:35:06.060366 kubelet[2885]: I1112 21:35:06.060330 2885 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 21:35:06.060366 kubelet[2885]: I1112 21:35:06.060357 2885 kubelet.go:312] "Adding apiserver pod source" Nov 12 21:35:06.060366 kubelet[2885]: I1112 21:35:06.060370 2885 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 21:35:06.067029 kubelet[2885]: I1112 21:35:06.066582 2885 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 21:35:06.067029 kubelet[2885]: I1112 21:35:06.066817 2885 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 21:35:06.067247 kubelet[2885]: I1112 21:35:06.067225 2885 server.go:1264] "Started kubelet" Nov 12 21:35:06.069036 kubelet[2885]: I1112 21:35:06.068992 2885 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 21:35:06.078160 kubelet[2885]: I1112 21:35:06.077292 2885 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 21:35:06.078477 kubelet[2885]: I1112 21:35:06.078425 2885 server.go:455] "Adding debug handlers to kubelet server" Nov 12 21:35:06.079426 kubelet[2885]: I1112 21:35:06.079364 2885 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 21:35:06.079634 kubelet[2885]: I1112 21:35:06.079609 2885 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 21:35:06.080519 kubelet[2885]: I1112 21:35:06.080491 2885 
volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 21:35:06.080585 kubelet[2885]: I1112 21:35:06.080569 2885 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 21:35:06.080705 kubelet[2885]: I1112 21:35:06.080681 2885 reconciler.go:26] "Reconciler: start to sync state" Nov 12 21:35:06.085294 kubelet[2885]: I1112 21:35:06.085019 2885 factory.go:221] Registration of the systemd container factory successfully Nov 12 21:35:06.085542 kubelet[2885]: I1112 21:35:06.085523 2885 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 21:35:06.086268 kubelet[2885]: I1112 21:35:06.086231 2885 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 21:35:06.087248 kubelet[2885]: E1112 21:35:06.087231 2885 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 21:35:06.087494 kubelet[2885]: I1112 21:35:06.087443 2885 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 21:35:06.087872 kubelet[2885]: I1112 21:35:06.087643 2885 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 21:35:06.087872 kubelet[2885]: I1112 21:35:06.087662 2885 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 21:35:06.087872 kubelet[2885]: E1112 21:35:06.087701 2885 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 21:35:06.091754 kubelet[2885]: I1112 21:35:06.091732 2885 factory.go:221] Registration of the containerd container factory successfully Nov 12 21:35:06.141693 kubelet[2885]: I1112 21:35:06.141653 2885 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 21:35:06.141693 kubelet[2885]: I1112 21:35:06.141671 2885 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 21:35:06.141850 kubelet[2885]: I1112 21:35:06.141744 2885 state_mem.go:36] "Initialized new in-memory state store" Nov 12 21:35:06.141976 kubelet[2885]: I1112 21:35:06.141952 2885 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 21:35:06.142010 kubelet[2885]: I1112 21:35:06.141968 2885 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 21:35:06.142034 kubelet[2885]: I1112 21:35:06.142011 2885 policy_none.go:49] "None policy: Start" Nov 12 21:35:06.142492 kubelet[2885]: I1112 21:35:06.142468 2885 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 21:35:06.142492 kubelet[2885]: I1112 21:35:06.142489 2885 state_mem.go:35] "Initializing new in-memory state store" Nov 12 21:35:06.142621 kubelet[2885]: I1112 21:35:06.142601 2885 state_mem.go:75] "Updated machine memory state" Nov 12 21:35:06.148046 kubelet[2885]: I1112 21:35:06.148009 2885 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 21:35:06.148518 kubelet[2885]: I1112 21:35:06.148306 2885 container_log_manager.go:186] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 21:35:06.148518 kubelet[2885]: I1112 21:35:06.148407 2885 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 21:35:06.184825 kubelet[2885]: I1112 21:35:06.184530 2885 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.188833 kubelet[2885]: I1112 21:35:06.188160 2885 topology_manager.go:215] "Topology Admit Handler" podUID="862df74c55eda4f90819a8743842ffba" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.188833 kubelet[2885]: I1112 21:35:06.188241 2885 topology_manager.go:215] "Topology Admit Handler" podUID="4a7a2cde9ce81311a03d2b73b604c745" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.188833 kubelet[2885]: I1112 21:35:06.188298 2885 topology_manager.go:215] "Topology Admit Handler" podUID="7ba04f15b0744f56b39b516d560b8928" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.197380 kubelet[2885]: I1112 21:35:06.197353 2885 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.197599 kubelet[2885]: I1112 21:35:06.197564 2885 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381381 kubelet[2885]: I1112 21:35:06.381255 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/862df74c55eda4f90819a8743842ffba-ca-certs\") pod \"kube-apiserver-ci-4081-2-0-6-01c097edc7\" (UID: \"862df74c55eda4f90819a8743842ffba\") " pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381381 kubelet[2885]: I1112 21:35:06.381305 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381381 kubelet[2885]: I1112 21:35:06.381331 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-ca-certs\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381381 kubelet[2885]: I1112 21:35:06.381356 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381381 kubelet[2885]: I1112 21:35:06.381384 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381632 kubelet[2885]: I1112 21:35:06.381408 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a7a2cde9ce81311a03d2b73b604c745-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-0-6-01c097edc7\" (UID: \"4a7a2cde9ce81311a03d2b73b604c745\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" 
Nov 12 21:35:06.381632 kubelet[2885]: I1112 21:35:06.381430 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ba04f15b0744f56b39b516d560b8928-kubeconfig\") pod \"kube-scheduler-ci-4081-2-0-6-01c097edc7\" (UID: \"7ba04f15b0744f56b39b516d560b8928\") " pod="kube-system/kube-scheduler-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381632 kubelet[2885]: I1112 21:35:06.381450 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/862df74c55eda4f90819a8743842ffba-k8s-certs\") pod \"kube-apiserver-ci-4081-2-0-6-01c097edc7\" (UID: \"862df74c55eda4f90819a8743842ffba\") " pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:06.381632 kubelet[2885]: I1112 21:35:06.381468 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/862df74c55eda4f90819a8743842ffba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-0-6-01c097edc7\" (UID: \"862df74c55eda4f90819a8743842ffba\") " pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:07.061740 kubelet[2885]: I1112 21:35:07.061663 2885 apiserver.go:52] "Watching apiserver" Nov 12 21:35:07.081234 kubelet[2885]: I1112 21:35:07.081122 2885 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 12 21:35:07.129484 kubelet[2885]: E1112 21:35:07.128903 2885 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-0-6-01c097edc7\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" Nov 12 21:35:07.143708 kubelet[2885]: I1112 21:35:07.143564 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-0-6-01c097edc7" 
podStartSLOduration=1.143344939 podStartE2EDuration="1.143344939s" podCreationTimestamp="2024-11-12 21:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:35:07.14266588 +0000 UTC m=+1.166891391" watchObservedRunningTime="2024-11-12 21:35:07.143344939 +0000 UTC m=+1.167570449" Nov 12 21:35:07.161681 kubelet[2885]: I1112 21:35:07.160801 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-0-6-01c097edc7" podStartSLOduration=1.1607843820000001 podStartE2EDuration="1.160784382s" podCreationTimestamp="2024-11-12 21:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:35:07.160031825 +0000 UTC m=+1.184257336" watchObservedRunningTime="2024-11-12 21:35:07.160784382 +0000 UTC m=+1.185009892" Nov 12 21:35:07.174730 kubelet[2885]: I1112 21:35:07.174635 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-0-6-01c097edc7" podStartSLOduration=1.174613451 podStartE2EDuration="1.174613451s" podCreationTimestamp="2024-11-12 21:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:35:07.173612847 +0000 UTC m=+1.197838367" watchObservedRunningTime="2024-11-12 21:35:07.174613451 +0000 UTC m=+1.198838961" Nov 12 21:35:11.581285 sudo[1964]: pam_unix(sudo:session): session closed for user root Nov 12 21:35:11.748828 sshd[1961]: pam_unix(sshd:session): session closed for user core Nov 12 21:35:11.752898 systemd[1]: sshd@16-188.245.86.234:22-147.75.109.163:40106.service: Deactivated successfully. Nov 12 21:35:11.755468 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 12 21:35:11.755941 systemd[1]: session-7.scope: Consumed 5.141s CPU time, 189.2M memory peak, 0B memory swap peak. Nov 12 21:35:11.758369 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. Nov 12 21:35:11.759954 systemd-logind[1492]: Removed session 7. Nov 12 21:35:13.095418 systemd[1]: Started sshd@19-188.245.86.234:22-35.240.185.59:33366.service - OpenSSH per-connection server daemon (35.240.185.59:33366). Nov 12 21:35:13.855535 sshd[2963]: Invalid user user from 35.240.185.59 port 33366 Nov 12 21:35:14.016762 sshd[2963]: Connection closed by invalid user user 35.240.185.59 port 33366 [preauth] Nov 12 21:35:14.018952 systemd[1]: sshd@19-188.245.86.234:22-35.240.185.59:33366.service: Deactivated successfully. Nov 12 21:35:20.853029 kubelet[2885]: I1112 21:35:20.852983 2885 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 21:35:20.854031 kubelet[2885]: I1112 21:35:20.853975 2885 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 21:35:20.854149 containerd[1513]: time="2024-11-12T21:35:20.853436483Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 12 21:35:21.871725 kubelet[2885]: I1112 21:35:21.871682 2885 topology_manager.go:215] "Topology Admit Handler" podUID="1521970a-f58d-4927-9795-846c06ccd1f9" podNamespace="kube-system" podName="kube-proxy-ztvs5" Nov 12 21:35:21.877287 kubelet[2885]: W1112 21:35:21.877236 2885 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-2-0-6-01c097edc7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-0-6-01c097edc7' and this object Nov 12 21:35:21.877287 kubelet[2885]: E1112 21:35:21.877273 2885 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-2-0-6-01c097edc7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-0-6-01c097edc7' and this object Nov 12 21:35:21.877867 kubelet[2885]: W1112 21:35:21.877847 2885 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-0-6-01c097edc7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-0-6-01c097edc7' and this object Nov 12 21:35:21.877911 kubelet[2885]: E1112 21:35:21.877875 2885 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-0-6-01c097edc7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-0-6-01c097edc7' and this object Nov 12 21:35:21.882183 systemd[1]: Created slice kubepods-besteffort-pod1521970a_f58d_4927_9795_846c06ccd1f9.slice - libcontainer 
container kubepods-besteffort-pod1521970a_f58d_4927_9795_846c06ccd1f9.slice. Nov 12 21:35:21.976448 kubelet[2885]: I1112 21:35:21.976400 2885 topology_manager.go:215] "Topology Admit Handler" podUID="bd1ff92d-4037-43a0-a020-219a0abf1f6d" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-9hpjb" Nov 12 21:35:21.985280 kubelet[2885]: I1112 21:35:21.985241 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1521970a-f58d-4927-9795-846c06ccd1f9-xtables-lock\") pod \"kube-proxy-ztvs5\" (UID: \"1521970a-f58d-4927-9795-846c06ccd1f9\") " pod="kube-system/kube-proxy-ztvs5" Nov 12 21:35:21.985280 kubelet[2885]: I1112 21:35:21.985271 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1521970a-f58d-4927-9795-846c06ccd1f9-lib-modules\") pod \"kube-proxy-ztvs5\" (UID: \"1521970a-f58d-4927-9795-846c06ccd1f9\") " pod="kube-system/kube-proxy-ztvs5" Nov 12 21:35:21.985280 kubelet[2885]: I1112 21:35:21.985288 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1521970a-f58d-4927-9795-846c06ccd1f9-kube-proxy\") pod \"kube-proxy-ztvs5\" (UID: \"1521970a-f58d-4927-9795-846c06ccd1f9\") " pod="kube-system/kube-proxy-ztvs5" Nov 12 21:35:21.985499 kubelet[2885]: I1112 21:35:21.985305 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh59t\" (UniqueName: \"kubernetes.io/projected/1521970a-f58d-4927-9795-846c06ccd1f9-kube-api-access-lh59t\") pod \"kube-proxy-ztvs5\" (UID: \"1521970a-f58d-4927-9795-846c06ccd1f9\") " pod="kube-system/kube-proxy-ztvs5" Nov 12 21:35:21.986933 systemd[1]: Created slice kubepods-besteffort-podbd1ff92d_4037_43a0_a020_219a0abf1f6d.slice - libcontainer container 
kubepods-besteffort-podbd1ff92d_4037_43a0_a020_219a0abf1f6d.slice. Nov 12 21:35:22.086715 kubelet[2885]: I1112 21:35:22.086288 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd1ff92d-4037-43a0-a020-219a0abf1f6d-var-lib-calico\") pod \"tigera-operator-5645cfc98-9hpjb\" (UID: \"bd1ff92d-4037-43a0-a020-219a0abf1f6d\") " pod="tigera-operator/tigera-operator-5645cfc98-9hpjb" Nov 12 21:35:22.086715 kubelet[2885]: I1112 21:35:22.086369 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtg7r\" (UniqueName: \"kubernetes.io/projected/bd1ff92d-4037-43a0-a020-219a0abf1f6d-kube-api-access-gtg7r\") pod \"tigera-operator-5645cfc98-9hpjb\" (UID: \"bd1ff92d-4037-43a0-a020-219a0abf1f6d\") " pod="tigera-operator/tigera-operator-5645cfc98-9hpjb" Nov 12 21:35:22.292097 containerd[1513]: time="2024-11-12T21:35:22.291982906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-9hpjb,Uid:bd1ff92d-4037-43a0-a020-219a0abf1f6d,Namespace:tigera-operator,Attempt:0,}" Nov 12 21:35:22.337381 containerd[1513]: time="2024-11-12T21:35:22.337017030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:22.337381 containerd[1513]: time="2024-11-12T21:35:22.337177723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:22.337381 containerd[1513]: time="2024-11-12T21:35:22.337196209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:22.337381 containerd[1513]: time="2024-11-12T21:35:22.337300024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:22.368370 systemd[1]: run-containerd-runc-k8s.io-c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9-runc.P11gSV.mount: Deactivated successfully. Nov 12 21:35:22.379345 systemd[1]: Started cri-containerd-c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9.scope - libcontainer container c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9. Nov 12 21:35:22.435170 containerd[1513]: time="2024-11-12T21:35:22.435028367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-9hpjb,Uid:bd1ff92d-4037-43a0-a020-219a0abf1f6d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9\"" Nov 12 21:35:22.439239 containerd[1513]: time="2024-11-12T21:35:22.439201415Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 21:35:23.096935 kubelet[2885]: E1112 21:35:23.096867 2885 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 12 21:35:23.097393 kubelet[2885]: E1112 21:35:23.096944 2885 projected.go:200] Error preparing data for projected volume kube-api-access-lh59t for pod kube-system/kube-proxy-ztvs5: failed to sync configmap cache: timed out waiting for the condition Nov 12 21:35:23.097393 kubelet[2885]: E1112 21:35:23.097109 2885 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1521970a-f58d-4927-9795-846c06ccd1f9-kube-api-access-lh59t podName:1521970a-f58d-4927-9795-846c06ccd1f9 nodeName:}" failed. No retries permitted until 2024-11-12 21:35:23.597039288 +0000 UTC m=+17.621264838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lh59t" (UniqueName: "kubernetes.io/projected/1521970a-f58d-4927-9795-846c06ccd1f9-kube-api-access-lh59t") pod "kube-proxy-ztvs5" (UID: "1521970a-f58d-4927-9795-846c06ccd1f9") : failed to sync configmap cache: timed out waiting for the condition Nov 12 21:35:23.990723 containerd[1513]: time="2024-11-12T21:35:23.990660202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztvs5,Uid:1521970a-f58d-4927-9795-846c06ccd1f9,Namespace:kube-system,Attempt:0,}" Nov 12 21:35:24.044811 containerd[1513]: time="2024-11-12T21:35:24.044237131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:24.046419 containerd[1513]: time="2024-11-12T21:35:24.045937791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:24.046419 containerd[1513]: time="2024-11-12T21:35:24.045962729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:24.046419 containerd[1513]: time="2024-11-12T21:35:24.046084229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:24.077296 systemd[1]: Started cri-containerd-0f007b84bbb87554bf100324ca09f65d155bc546879ff670fe66282446f388f3.scope - libcontainer container 0f007b84bbb87554bf100324ca09f65d155bc546879ff670fe66282446f388f3. 
Nov 12 21:35:24.123873 containerd[1513]: time="2024-11-12T21:35:24.123502656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztvs5,Uid:1521970a-f58d-4927-9795-846c06ccd1f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f007b84bbb87554bf100324ca09f65d155bc546879ff670fe66282446f388f3\"" Nov 12 21:35:24.130828 containerd[1513]: time="2024-11-12T21:35:24.130708491Z" level=info msg="CreateContainer within sandbox \"0f007b84bbb87554bf100324ca09f65d155bc546879ff670fe66282446f388f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 21:35:24.156050 containerd[1513]: time="2024-11-12T21:35:24.156001311Z" level=info msg="CreateContainer within sandbox \"0f007b84bbb87554bf100324ca09f65d155bc546879ff670fe66282446f388f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50f09e26b96befe159d30220dd790797ced6b790f822a062f0741ead1d7bb264\"" Nov 12 21:35:24.156803 containerd[1513]: time="2024-11-12T21:35:24.156749468Z" level=info msg="StartContainer for \"50f09e26b96befe159d30220dd790797ced6b790f822a062f0741ead1d7bb264\"" Nov 12 21:35:24.191211 systemd[1]: Started cri-containerd-50f09e26b96befe159d30220dd790797ced6b790f822a062f0741ead1d7bb264.scope - libcontainer container 50f09e26b96befe159d30220dd790797ced6b790f822a062f0741ead1d7bb264. Nov 12 21:35:24.233194 containerd[1513]: time="2024-11-12T21:35:24.233010715Z" level=info msg="StartContainer for \"50f09e26b96befe159d30220dd790797ced6b790f822a062f0741ead1d7bb264\" returns successfully" Nov 12 21:35:24.705388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429780614.mount: Deactivated successfully. 
Nov 12 21:35:24.748888 containerd[1513]: time="2024-11-12T21:35:24.748006976Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:24.750773 containerd[1513]: time="2024-11-12T21:35:24.750738498Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763371" Nov 12 21:35:24.752180 containerd[1513]: time="2024-11-12T21:35:24.752155901Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:24.755953 containerd[1513]: time="2024-11-12T21:35:24.755912705Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:24.756830 containerd[1513]: time="2024-11-12T21:35:24.756522499Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 2.317283042s" Nov 12 21:35:24.757282 containerd[1513]: time="2024-11-12T21:35:24.757265366Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 21:35:24.764899 containerd[1513]: time="2024-11-12T21:35:24.764794431Z" level=info msg="CreateContainer within sandbox \"c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 21:35:24.795533 containerd[1513]: time="2024-11-12T21:35:24.794607601Z" level=info msg="CreateContainer within sandbox 
\"c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91\"" Nov 12 21:35:24.796977 containerd[1513]: time="2024-11-12T21:35:24.796413481Z" level=info msg="StartContainer for \"e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91\"" Nov 12 21:35:24.839212 systemd[1]: Started cri-containerd-e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91.scope - libcontainer container e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91. Nov 12 21:35:24.881161 containerd[1513]: time="2024-11-12T21:35:24.881017365Z" level=info msg="StartContainer for \"e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91\" returns successfully" Nov 12 21:35:25.202308 kubelet[2885]: I1112 21:35:25.196355 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-9hpjb" podStartSLOduration=1.87523981 podStartE2EDuration="4.196330451s" podCreationTimestamp="2024-11-12 21:35:21 +0000 UTC" firstStartedPulling="2024-11-12 21:35:22.437480278 +0000 UTC m=+16.461705787" lastFinishedPulling="2024-11-12 21:35:24.758570918 +0000 UTC m=+18.782796428" observedRunningTime="2024-11-12 21:35:25.195927829 +0000 UTC m=+19.220153339" watchObservedRunningTime="2024-11-12 21:35:25.196330451 +0000 UTC m=+19.220555991" Nov 12 21:35:25.206745 kubelet[2885]: I1112 21:35:25.206574 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ztvs5" podStartSLOduration=4.206547471 podStartE2EDuration="4.206547471s" podCreationTimestamp="2024-11-12 21:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:35:25.206363072 +0000 UTC m=+19.230588582" watchObservedRunningTime="2024-11-12 21:35:25.206547471 +0000 UTC 
m=+19.230773011" Nov 12 21:35:26.618339 systemd[1]: Started sshd@20-188.245.86.234:22-35.240.185.59:40024.service - OpenSSH per-connection server daemon (35.240.185.59:40024). Nov 12 21:35:27.535144 sshd[3256]: Invalid user elastic from 35.240.185.59 port 40024 Nov 12 21:35:27.751961 sshd[3256]: Connection closed by invalid user elastic 35.240.185.59 port 40024 [preauth] Nov 12 21:35:27.754288 systemd[1]: sshd@20-188.245.86.234:22-35.240.185.59:40024.service: Deactivated successfully. Nov 12 21:35:27.845416 kubelet[2885]: I1112 21:35:27.845271 2885 topology_manager.go:215] "Topology Admit Handler" podUID="291f079c-130a-4212-b0f4-3264930879e3" podNamespace="calico-system" podName="calico-typha-bdc5ccb87-ls5wx" Nov 12 21:35:27.857542 systemd[1]: Created slice kubepods-besteffort-pod291f079c_130a_4212_b0f4_3264930879e3.slice - libcontainer container kubepods-besteffort-pod291f079c_130a_4212_b0f4_3264930879e3.slice. Nov 12 21:35:27.972429 kubelet[2885]: I1112 21:35:27.972384 2885 topology_manager.go:215] "Topology Admit Handler" podUID="0aff941e-9bda-4d34-a57e-3c6bf15392cf" podNamespace="calico-system" podName="calico-node-65bz6" Nov 12 21:35:27.990881 systemd[1]: Created slice kubepods-besteffort-pod0aff941e_9bda_4d34_a57e_3c6bf15392cf.slice - libcontainer container kubepods-besteffort-pod0aff941e_9bda_4d34_a57e_3c6bf15392cf.slice. 
Nov 12 21:35:28.020596 kubelet[2885]: I1112 21:35:28.020539 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291f079c-130a-4212-b0f4-3264930879e3-tigera-ca-bundle\") pod \"calico-typha-bdc5ccb87-ls5wx\" (UID: \"291f079c-130a-4212-b0f4-3264930879e3\") " pod="calico-system/calico-typha-bdc5ccb87-ls5wx" Nov 12 21:35:28.020596 kubelet[2885]: I1112 21:35:28.020583 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhjf\" (UniqueName: \"kubernetes.io/projected/291f079c-130a-4212-b0f4-3264930879e3-kube-api-access-6zhjf\") pod \"calico-typha-bdc5ccb87-ls5wx\" (UID: \"291f079c-130a-4212-b0f4-3264930879e3\") " pod="calico-system/calico-typha-bdc5ccb87-ls5wx" Nov 12 21:35:28.020596 kubelet[2885]: I1112 21:35:28.020602 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/291f079c-130a-4212-b0f4-3264930879e3-typha-certs\") pod \"calico-typha-bdc5ccb87-ls5wx\" (UID: \"291f079c-130a-4212-b0f4-3264930879e3\") " pod="calico-system/calico-typha-bdc5ccb87-ls5wx" Nov 12 21:35:28.121953 kubelet[2885]: I1112 21:35:28.121809 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5bk4\" (UniqueName: \"kubernetes.io/projected/0aff941e-9bda-4d34-a57e-3c6bf15392cf-kube-api-access-f5bk4\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.121953 kubelet[2885]: I1112 21:35:28.121909 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-xtables-lock\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " 
pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.121953 kubelet[2885]: I1112 21:35:28.121942 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-var-run-calico\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122177 kubelet[2885]: I1112 21:35:28.121967 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-cni-log-dir\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122177 kubelet[2885]: I1112 21:35:28.121984 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-cni-bin-dir\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122177 kubelet[2885]: I1112 21:35:28.122013 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-policysync\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122177 kubelet[2885]: I1112 21:35:28.122027 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0aff941e-9bda-4d34-a57e-3c6bf15392cf-node-certs\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122177 kubelet[2885]: I1112 
21:35:28.122040 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-lib-modules\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122325 kubelet[2885]: I1112 21:35:28.122053 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-cni-net-dir\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122325 kubelet[2885]: I1112 21:35:28.122079 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aff941e-9bda-4d34-a57e-3c6bf15392cf-tigera-ca-bundle\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122325 kubelet[2885]: I1112 21:35:28.122093 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-var-lib-calico\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.122325 kubelet[2885]: I1112 21:35:28.122108 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0aff941e-9bda-4d34-a57e-3c6bf15392cf-flexvol-driver-host\") pod \"calico-node-65bz6\" (UID: \"0aff941e-9bda-4d34-a57e-3c6bf15392cf\") " pod="calico-system/calico-node-65bz6" Nov 12 21:35:28.149566 kubelet[2885]: I1112 21:35:28.145677 2885 topology_manager.go:215] "Topology Admit 
Handler" podUID="23bc69df-3188-4289-a376-a1e2658f5a79" podNamespace="calico-system" podName="csi-node-driver-chjk5" Nov 12 21:35:28.149566 kubelet[2885]: E1112 21:35:28.145970 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chjk5" podUID="23bc69df-3188-4289-a376-a1e2658f5a79" Nov 12 21:35:28.166700 containerd[1513]: time="2024-11-12T21:35:28.166569338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bdc5ccb87-ls5wx,Uid:291f079c-130a-4212-b0f4-3264930879e3,Namespace:calico-system,Attempt:0,}" Nov 12 21:35:28.235033 kubelet[2885]: E1112 21:35:28.232460 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.235033 kubelet[2885]: W1112 21:35:28.232487 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.235033 kubelet[2885]: E1112 21:35:28.232515 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.235033 kubelet[2885]: E1112 21:35:28.232755 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.235033 kubelet[2885]: W1112 21:35:28.232762 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.235033 kubelet[2885]: E1112 21:35:28.233107 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.235033 kubelet[2885]: E1112 21:35:28.233247 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.235033 kubelet[2885]: W1112 21:35:28.233255 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.235033 kubelet[2885]: E1112 21:35:28.233364 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.235033 kubelet[2885]: E1112 21:35:28.233657 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.235447 kubelet[2885]: W1112 21:35:28.233665 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.235447 kubelet[2885]: E1112 21:35:28.234059 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.236782 containerd[1513]: time="2024-11-12T21:35:28.235607588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:28.236782 containerd[1513]: time="2024-11-12T21:35:28.235699583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:28.236782 containerd[1513]: time="2024-11-12T21:35:28.235710724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:28.236782 containerd[1513]: time="2024-11-12T21:35:28.235831272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:28.237101 kubelet[2885]: E1112 21:35:28.236170 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.237101 kubelet[2885]: W1112 21:35:28.236184 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.237101 kubelet[2885]: E1112 21:35:28.236206 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.239112 kubelet[2885]: E1112 21:35:28.237286 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.239112 kubelet[2885]: W1112 21:35:28.237298 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.239112 kubelet[2885]: E1112 21:35:28.237408 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.239112 kubelet[2885]: E1112 21:35:28.237593 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.239112 kubelet[2885]: W1112 21:35:28.237603 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.239112 kubelet[2885]: E1112 21:35:28.237615 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.239112 kubelet[2885]: E1112 21:35:28.239041 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.239112 kubelet[2885]: W1112 21:35:28.239051 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.239112 kubelet[2885]: E1112 21:35:28.239061 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.241432 kubelet[2885]: E1112 21:35:28.240805 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.241432 kubelet[2885]: W1112 21:35:28.240820 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.241432 kubelet[2885]: E1112 21:35:28.240842 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.241432 kubelet[2885]: E1112 21:35:28.241124 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.241432 kubelet[2885]: W1112 21:35:28.241136 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.241432 kubelet[2885]: E1112 21:35:28.241148 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.259641 kubelet[2885]: E1112 21:35:28.259579 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.259641 kubelet[2885]: W1112 21:35:28.259606 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.259641 kubelet[2885]: E1112 21:35:28.259626 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.280529 systemd[1]: Started cri-containerd-8b35492c5463dea0c4a14fc6f989e37bd36860913c405d9827c124b7ad8e886e.scope - libcontainer container 8b35492c5463dea0c4a14fc6f989e37bd36860913c405d9827c124b7ad8e886e. Nov 12 21:35:28.295471 containerd[1513]: time="2024-11-12T21:35:28.295416476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-65bz6,Uid:0aff941e-9bda-4d34-a57e-3c6bf15392cf,Namespace:calico-system,Attempt:0,}" Nov 12 21:35:28.328448 kubelet[2885]: E1112 21:35:28.328314 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.328448 kubelet[2885]: W1112 21:35:28.328345 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.328448 kubelet[2885]: E1112 21:35:28.328371 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.329190 kubelet[2885]: E1112 21:35:28.328765 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.329190 kubelet[2885]: W1112 21:35:28.328780 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.329190 kubelet[2885]: E1112 21:35:28.328791 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.329190 kubelet[2885]: E1112 21:35:28.329124 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.329190 kubelet[2885]: W1112 21:35:28.329135 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.329190 kubelet[2885]: E1112 21:35:28.329146 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.329807 kubelet[2885]: I1112 21:35:28.329205 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/23bc69df-3188-4289-a376-a1e2658f5a79-varrun\") pod \"csi-node-driver-chjk5\" (UID: \"23bc69df-3188-4289-a376-a1e2658f5a79\") " pod="calico-system/csi-node-driver-chjk5" Nov 12 21:35:28.329807 kubelet[2885]: E1112 21:35:28.329676 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.329807 kubelet[2885]: W1112 21:35:28.329687 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.329807 kubelet[2885]: E1112 21:35:28.329697 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.329807 kubelet[2885]: I1112 21:35:28.329727 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23bc69df-3188-4289-a376-a1e2658f5a79-kubelet-dir\") pod \"csi-node-driver-chjk5\" (UID: \"23bc69df-3188-4289-a376-a1e2658f5a79\") " pod="calico-system/csi-node-driver-chjk5" Nov 12 21:35:28.330536 kubelet[2885]: E1112 21:35:28.330105 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.330536 kubelet[2885]: W1112 21:35:28.330117 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.331504 kubelet[2885]: E1112 21:35:28.330706 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.331504 kubelet[2885]: I1112 21:35:28.330732 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnqnp\" (UniqueName: \"kubernetes.io/projected/23bc69df-3188-4289-a376-a1e2658f5a79-kube-api-access-qnqnp\") pod \"csi-node-driver-chjk5\" (UID: \"23bc69df-3188-4289-a376-a1e2658f5a79\") " pod="calico-system/csi-node-driver-chjk5" Nov 12 21:35:28.331504 kubelet[2885]: E1112 21:35:28.330815 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.331504 kubelet[2885]: W1112 21:35:28.330824 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.331504 kubelet[2885]: E1112 21:35:28.330893 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.333205 kubelet[2885]: E1112 21:35:28.333139 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.333205 kubelet[2885]: W1112 21:35:28.333162 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.333205 kubelet[2885]: E1112 21:35:28.333192 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.333479 kubelet[2885]: E1112 21:35:28.333452 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.333479 kubelet[2885]: W1112 21:35:28.333474 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.334173 kubelet[2885]: E1112 21:35:28.334145 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.334233 kubelet[2885]: I1112 21:35:28.334194 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23bc69df-3188-4289-a376-a1e2658f5a79-socket-dir\") pod \"csi-node-driver-chjk5\" (UID: \"23bc69df-3188-4289-a376-a1e2658f5a79\") " pod="calico-system/csi-node-driver-chjk5" Nov 12 21:35:28.334510 kubelet[2885]: E1112 21:35:28.334460 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.334510 kubelet[2885]: W1112 21:35:28.334476 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.334586 kubelet[2885]: E1112 21:35:28.334574 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.334610 kubelet[2885]: I1112 21:35:28.334595 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23bc69df-3188-4289-a376-a1e2658f5a79-registration-dir\") pod \"csi-node-driver-chjk5\" (UID: \"23bc69df-3188-4289-a376-a1e2658f5a79\") " pod="calico-system/csi-node-driver-chjk5" Nov 12 21:35:28.335104 kubelet[2885]: E1112 21:35:28.334777 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.335104 kubelet[2885]: W1112 21:35:28.334792 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.335104 kubelet[2885]: E1112 21:35:28.334881 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.335104 kubelet[2885]: E1112 21:35:28.335031 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.335104 kubelet[2885]: W1112 21:35:28.335043 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.335104 kubelet[2885]: E1112 21:35:28.335057 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.335417 kubelet[2885]: E1112 21:35:28.335403 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.335417 kubelet[2885]: W1112 21:35:28.335414 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.335487 kubelet[2885]: E1112 21:35:28.335428 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.336122 kubelet[2885]: E1112 21:35:28.336061 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.336838 kubelet[2885]: W1112 21:35:28.336813 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.336838 kubelet[2885]: E1112 21:35:28.336836 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.337410 kubelet[2885]: E1112 21:35:28.337369 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.337410 kubelet[2885]: W1112 21:35:28.337389 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.337410 kubelet[2885]: E1112 21:35:28.337401 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.337877 kubelet[2885]: E1112 21:35:28.337843 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.337877 kubelet[2885]: W1112 21:35:28.337860 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.337877 kubelet[2885]: E1112 21:35:28.337869 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.358967 containerd[1513]: time="2024-11-12T21:35:28.357320124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:28.358967 containerd[1513]: time="2024-11-12T21:35:28.358898656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:28.358967 containerd[1513]: time="2024-11-12T21:35:28.358929635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:28.359244 containerd[1513]: time="2024-11-12T21:35:28.359217390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:28.396332 systemd[1]: Started cri-containerd-5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72.scope - libcontainer container 5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72. Nov 12 21:35:28.411213 containerd[1513]: time="2024-11-12T21:35:28.409799120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bdc5ccb87-ls5wx,Uid:291f079c-130a-4212-b0f4-3264930879e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b35492c5463dea0c4a14fc6f989e37bd36860913c405d9827c124b7ad8e886e\"" Nov 12 21:35:28.412148 containerd[1513]: time="2024-11-12T21:35:28.411724970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 21:35:28.435887 kubelet[2885]: E1112 21:35:28.435840 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.435887 kubelet[2885]: W1112 21:35:28.435871 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.435887 kubelet[2885]: E1112 21:35:28.435893 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.437902 kubelet[2885]: E1112 21:35:28.437860 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.437902 kubelet[2885]: W1112 21:35:28.437880 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.437902 kubelet[2885]: E1112 21:35:28.437897 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.440016 kubelet[2885]: E1112 21:35:28.439973 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.440016 kubelet[2885]: W1112 21:35:28.439997 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.440209 kubelet[2885]: E1112 21:35:28.440023 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.440601 kubelet[2885]: E1112 21:35:28.440575 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.440741 kubelet[2885]: W1112 21:35:28.440659 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.441101 kubelet[2885]: E1112 21:35:28.440913 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.441583 kubelet[2885]: E1112 21:35:28.441294 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.441583 kubelet[2885]: W1112 21:35:28.441323 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.441960 kubelet[2885]: E1112 21:35:28.441824 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.441960 kubelet[2885]: E1112 21:35:28.441877 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.441960 kubelet[2885]: W1112 21:35:28.441885 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.442201 kubelet[2885]: E1112 21:35:28.442129 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.442417 kubelet[2885]: E1112 21:35:28.442346 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.442417 kubelet[2885]: W1112 21:35:28.442355 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.442576 kubelet[2885]: E1112 21:35:28.442496 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.442862 kubelet[2885]: E1112 21:35:28.442764 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.442862 kubelet[2885]: W1112 21:35:28.442773 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.443022 kubelet[2885]: E1112 21:35:28.442947 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.443371 kubelet[2885]: E1112 21:35:28.443241 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.443371 kubelet[2885]: W1112 21:35:28.443258 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.443580 kubelet[2885]: E1112 21:35:28.443565 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.443750 kubelet[2885]: E1112 21:35:28.443683 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.443750 kubelet[2885]: W1112 21:35:28.443692 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.443750 kubelet[2885]: E1112 21:35:28.443736 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.444509 kubelet[2885]: E1112 21:35:28.444497 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.444600 kubelet[2885]: W1112 21:35:28.444582 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.445049 kubelet[2885]: E1112 21:35:28.445037 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.445193 kubelet[2885]: W1112 21:35:28.445173 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.445474 kubelet[2885]: E1112 21:35:28.445317 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.445474 kubelet[2885]: E1112 21:35:28.445443 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.445867 kubelet[2885]: E1112 21:35:28.445797 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.445867 kubelet[2885]: W1112 21:35:28.445807 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.446039 kubelet[2885]: E1112 21:35:28.445947 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.446406 kubelet[2885]: E1112 21:35:28.446323 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.446406 kubelet[2885]: W1112 21:35:28.446358 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.446857 kubelet[2885]: E1112 21:35:28.446436 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.447157 kubelet[2885]: E1112 21:35:28.447144 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.447222 kubelet[2885]: W1112 21:35:28.447212 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.447424 kubelet[2885]: E1112 21:35:28.447381 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.449230 kubelet[2885]: E1112 21:35:28.448816 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.449230 kubelet[2885]: W1112 21:35:28.448828 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.449230 kubelet[2885]: E1112 21:35:28.448871 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.449230 kubelet[2885]: E1112 21:35:28.449026 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.449230 kubelet[2885]: W1112 21:35:28.449034 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.449230 kubelet[2885]: E1112 21:35:28.449120 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.449549 kubelet[2885]: E1112 21:35:28.449454 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.449549 kubelet[2885]: W1112 21:35:28.449465 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.449549 kubelet[2885]: E1112 21:35:28.449499 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:28.449777 kubelet[2885]: E1112 21:35:28.449765 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.449959 kubelet[2885]: W1112 21:35:28.449831 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.449991 kubelet[2885]: E1112 21:35:28.449958 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:28.450617 kubelet[2885]: E1112 21:35:28.450599 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:28.450843 kubelet[2885]: W1112 21:35:28.450688 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:28.451110 kubelet[2885]: E1112 21:35:28.450900 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 21:35:28.452495 kubelet[2885]: E1112 21:35:28.452481 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:28.452678 kubelet[2885]: W1112 21:35:28.452576 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:28.452678 kubelet[2885]: E1112 21:35:28.452634 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:28.453567 kubelet[2885]: E1112 21:35:28.453002 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:28.453567 kubelet[2885]: W1112 21:35:28.453012 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:28.453567 kubelet[2885]: E1112 21:35:28.453123 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:28.453567 kubelet[2885]: E1112 21:35:28.453398 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:28.453567 kubelet[2885]: W1112 21:35:28.453405 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:28.453567 kubelet[2885]: E1112 21:35:28.453527 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:28.454240 kubelet[2885]: E1112 21:35:28.454207 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:28.454420 kubelet[2885]: W1112 21:35:28.454241 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:28.454420 kubelet[2885]: E1112 21:35:28.454276 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:28.454793 kubelet[2885]: E1112 21:35:28.454744 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:28.454793 kubelet[2885]: W1112 21:35:28.454757 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:28.454793 kubelet[2885]: E1112 21:35:28.454768 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:28.464274 containerd[1513]: time="2024-11-12T21:35:28.464208955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-65bz6,Uid:0aff941e-9bda-4d34-a57e-3c6bf15392cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72\""
Nov 12 21:35:28.468935 kubelet[2885]: E1112 21:35:28.468432 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:28.468935 kubelet[2885]: W1112 21:35:28.468454 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:28.468935 kubelet[2885]: E1112 21:35:28.468474 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:30.089114 kubelet[2885]: E1112 21:35:30.088965 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chjk5" podUID="23bc69df-3188-4289-a376-a1e2658f5a79"
Nov 12 21:35:30.933905 containerd[1513]: time="2024-11-12T21:35:30.933846616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:35:30.935093 containerd[1513]: time="2024-11-12T21:35:30.934981910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168"
Nov 12 21:35:30.936059 containerd[1513]: time="2024-11-12T21:35:30.936023193Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:35:30.938330 containerd[1513]: time="2024-11-12T21:35:30.938300031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 21:35:30.938969 containerd[1513]: time="2024-11-12T21:35:30.938842489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.527048679s"
Nov 12 21:35:30.938969 containerd[1513]: time="2024-11-12T21:35:30.938870042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\""
Nov 12 21:35:30.940108 containerd[1513]: time="2024-11-12T21:35:30.939966642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\""
Nov 12 21:35:30.957118 containerd[1513]: time="2024-11-12T21:35:30.957058451Z" level=info msg="CreateContainer within sandbox \"8b35492c5463dea0c4a14fc6f989e37bd36860913c405d9827c124b7ad8e886e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 12 21:35:30.976301 containerd[1513]: time="2024-11-12T21:35:30.976257405Z" level=info msg="CreateContainer within sandbox \"8b35492c5463dea0c4a14fc6f989e37bd36860913c405d9827c124b7ad8e886e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b19187b66b83af49da397266c13430877a5e4471b85119a3fdc19faad1a77efd\""
Nov 12 21:35:30.976896 containerd[1513]: time="2024-11-12T21:35:30.976867352Z" level=info msg="StartContainer for \"b19187b66b83af49da397266c13430877a5e4471b85119a3fdc19faad1a77efd\""
Nov 12 21:35:31.036251 systemd[1]: Started cri-containerd-b19187b66b83af49da397266c13430877a5e4471b85119a3fdc19faad1a77efd.scope - libcontainer container b19187b66b83af49da397266c13430877a5e4471b85119a3fdc19faad1a77efd.
Nov 12 21:35:31.084611 containerd[1513]: time="2024-11-12T21:35:31.084559621Z" level=info msg="StartContainer for \"b19187b66b83af49da397266c13430877a5e4471b85119a3fdc19faad1a77efd\" returns successfully"
Nov 12 21:35:31.203793 kubelet[2885]: I1112 21:35:31.203089 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bdc5ccb87-ls5wx" podStartSLOduration=1.6744994420000001 podStartE2EDuration="4.203057934s" podCreationTimestamp="2024-11-12 21:35:27 +0000 UTC" firstStartedPulling="2024-11-12 21:35:28.411094506 +0000 UTC m=+22.435320015" lastFinishedPulling="2024-11-12 21:35:30.939652996 +0000 UTC m=+24.963878507" observedRunningTime="2024-11-12 21:35:31.203017298 +0000 UTC m=+25.227242828" watchObservedRunningTime="2024-11-12 21:35:31.203057934 +0000 UTC m=+25.227283444"
Nov 12 21:35:31.251363 kubelet[2885]: E1112 21:35:31.251314 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.251363 kubelet[2885]: W1112 21:35:31.251344 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.251363 kubelet[2885]: E1112 21:35:31.251369 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.251630 kubelet[2885]: E1112 21:35:31.251604 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.251630 kubelet[2885]: W1112 21:35:31.251623 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.251709 kubelet[2885]: E1112 21:35:31.251635 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.251875 kubelet[2885]: E1112 21:35:31.251856 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.251875 kubelet[2885]: W1112 21:35:31.251869 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.251980 kubelet[2885]: E1112 21:35:31.251879 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.252154 kubelet[2885]: E1112 21:35:31.252136 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.252154 kubelet[2885]: W1112 21:35:31.252149 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.252238 kubelet[2885]: E1112 21:35:31.252160 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.252464 kubelet[2885]: E1112 21:35:31.252443 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.252464 kubelet[2885]: W1112 21:35:31.252456 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.252577 kubelet[2885]: E1112 21:35:31.252467 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.252708 kubelet[2885]: E1112 21:35:31.252681 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.252708 kubelet[2885]: W1112 21:35:31.252695 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.252708 kubelet[2885]: E1112 21:35:31.252706 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.252953 kubelet[2885]: E1112 21:35:31.252930 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.252953 kubelet[2885]: W1112 21:35:31.252944 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.253131 kubelet[2885]: E1112 21:35:31.252957 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.253197 kubelet[2885]: E1112 21:35:31.253185 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.253197 kubelet[2885]: W1112 21:35:31.253196 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.253308 kubelet[2885]: E1112 21:35:31.253204 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.253444 kubelet[2885]: E1112 21:35:31.253430 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.253619 kubelet[2885]: W1112 21:35:31.253512 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.253619 kubelet[2885]: E1112 21:35:31.253530 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.253942 kubelet[2885]: E1112 21:35:31.253921 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.253942 kubelet[2885]: W1112 21:35:31.253936 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.254022 kubelet[2885]: E1112 21:35:31.253948 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.254349 kubelet[2885]: E1112 21:35:31.254236 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.254349 kubelet[2885]: W1112 21:35:31.254251 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.254349 kubelet[2885]: E1112 21:35:31.254262 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.254574 kubelet[2885]: E1112 21:35:31.254474 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.254574 kubelet[2885]: W1112 21:35:31.254482 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.254574 kubelet[2885]: E1112 21:35:31.254492 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.255249 kubelet[2885]: E1112 21:35:31.254911 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.255249 kubelet[2885]: W1112 21:35:31.254921 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.255249 kubelet[2885]: E1112 21:35:31.254952 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.255472 kubelet[2885]: E1112 21:35:31.255367 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.255472 kubelet[2885]: W1112 21:35:31.255378 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.255472 kubelet[2885]: E1112 21:35:31.255387 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.255712 kubelet[2885]: E1112 21:35:31.255640 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.255712 kubelet[2885]: W1112 21:35:31.255658 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.255712 kubelet[2885]: E1112 21:35:31.255667 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.262142 kubelet[2885]: E1112 21:35:31.262113 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.262142 kubelet[2885]: W1112 21:35:31.262135 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.262234 kubelet[2885]: E1112 21:35:31.262152 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.262523 kubelet[2885]: E1112 21:35:31.262492 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.262575 kubelet[2885]: W1112 21:35:31.262528 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.262575 kubelet[2885]: E1112 21:35:31.262545 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.262843 kubelet[2885]: E1112 21:35:31.262828 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.262843 kubelet[2885]: W1112 21:35:31.262841 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.263665 kubelet[2885]: E1112 21:35:31.262856 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.263665 kubelet[2885]: E1112 21:35:31.263171 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.263665 kubelet[2885]: W1112 21:35:31.263182 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.263665 kubelet[2885]: E1112 21:35:31.263200 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.263665 kubelet[2885]: E1112 21:35:31.263402 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.263665 kubelet[2885]: W1112 21:35:31.263412 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.263665 kubelet[2885]: E1112 21:35:31.263427 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.263665 kubelet[2885]: E1112 21:35:31.263648 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.263665 kubelet[2885]: W1112 21:35:31.263658 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.264772 kubelet[2885]: E1112 21:35:31.263694 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.264772 kubelet[2885]: E1112 21:35:31.263925 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.264772 kubelet[2885]: W1112 21:35:31.263934 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.264772 kubelet[2885]: E1112 21:35:31.263960 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.264772 kubelet[2885]: E1112 21:35:31.264223 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.264772 kubelet[2885]: W1112 21:35:31.264241 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.264772 kubelet[2885]: E1112 21:35:31.264271 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.264772 kubelet[2885]: E1112 21:35:31.264496 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.264772 kubelet[2885]: W1112 21:35:31.264505 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.264772 kubelet[2885]: E1112 21:35:31.264519 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.265121 kubelet[2885]: E1112 21:35:31.264802 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.265121 kubelet[2885]: W1112 21:35:31.264813 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.265121 kubelet[2885]: E1112 21:35:31.264825 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.265121 kubelet[2885]: E1112 21:35:31.265029 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.265121 kubelet[2885]: W1112 21:35:31.265039 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.265121 kubelet[2885]: E1112 21:35:31.265049 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.265356 kubelet[2885]: E1112 21:35:31.265331 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.265435 kubelet[2885]: W1112 21:35:31.265346 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.266003 kubelet[2885]: E1112 21:35:31.265478 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.266003 kubelet[2885]: E1112 21:35:31.265742 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.266003 kubelet[2885]: W1112 21:35:31.265752 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.266003 kubelet[2885]: E1112 21:35:31.265978 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.266003 kubelet[2885]: E1112 21:35:31.265995 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.266003 kubelet[2885]: W1112 21:35:31.266005 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.266228 kubelet[2885]: E1112 21:35:31.266027 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.266393 kubelet[2885]: E1112 21:35:31.266349 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.266393 kubelet[2885]: W1112 21:35:31.266386 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.266468 kubelet[2885]: E1112 21:35:31.266404 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.266742 kubelet[2885]: E1112 21:35:31.266721 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.266742 kubelet[2885]: W1112 21:35:31.266733 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.266818 kubelet[2885]: E1112 21:35:31.266745 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.267319 kubelet[2885]: E1112 21:35:31.267013 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.267319 kubelet[2885]: W1112 21:35:31.267027 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.267319 kubelet[2885]: E1112 21:35:31.267039 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:31.267474 kubelet[2885]: E1112 21:35:31.267461 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:31.267545 kubelet[2885]: W1112 21:35:31.267520 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:31.267545 kubelet[2885]: E1112 21:35:31.267536 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.088938 kubelet[2885]: E1112 21:35:32.088504 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chjk5" podUID="23bc69df-3188-4289-a376-a1e2658f5a79"
Nov 12 21:35:32.197123 kubelet[2885]: I1112 21:35:32.196180 2885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 21:35:32.263328 kubelet[2885]: E1112 21:35:32.263294 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.263951 kubelet[2885]: W1112 21:35:32.263788 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.263951 kubelet[2885]: E1112 21:35:32.263821 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.264814 kubelet[2885]: E1112 21:35:32.264645 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.264814 kubelet[2885]: W1112 21:35:32.264657 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.264814 kubelet[2885]: E1112 21:35:32.264667 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.265697 kubelet[2885]: E1112 21:35:32.265383 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.265697 kubelet[2885]: W1112 21:35:32.265394 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.265697 kubelet[2885]: E1112 21:35:32.265405 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.266402 kubelet[2885]: E1112 21:35:32.266091 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.266402 kubelet[2885]: W1112 21:35:32.266106 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.266402 kubelet[2885]: E1112 21:35:32.266119 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.266956 kubelet[2885]: E1112 21:35:32.266799 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.266956 kubelet[2885]: W1112 21:35:32.266814 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.266956 kubelet[2885]: E1112 21:35:32.266825 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.267562 kubelet[2885]: E1112 21:35:32.267402 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.267562 kubelet[2885]: W1112 21:35:32.267413 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.267562 kubelet[2885]: E1112 21:35:32.267424 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.268488 kubelet[2885]: E1112 21:35:32.268021 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.268488 kubelet[2885]: W1112 21:35:32.268034 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.268488 kubelet[2885]: E1112 21:35:32.268045 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.268949 kubelet[2885]: E1112 21:35:32.268842 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.268949 kubelet[2885]: W1112 21:35:32.268853 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.268949 kubelet[2885]: E1112 21:35:32.268864 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.269538 kubelet[2885]: E1112 21:35:32.269366 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.269538 kubelet[2885]: W1112 21:35:32.269376 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.269538 kubelet[2885]: E1112 21:35:32.269387 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 21:35:32.269999 kubelet[2885]: E1112 21:35:32.269987 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 21:35:32.270234 kubelet[2885]: W1112 21:35:32.270095 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 21:35:32.270234 kubelet[2885]: E1112 21:35:32.270109 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.270939 kubelet[2885]: E1112 21:35:32.270818 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.270939 kubelet[2885]: W1112 21:35:32.270828 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.270939 kubelet[2885]: E1112 21:35:32.270838 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.272497 kubelet[2885]: E1112 21:35:32.271374 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.272497 kubelet[2885]: W1112 21:35:32.271385 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.272497 kubelet[2885]: E1112 21:35:32.271396 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.272766 kubelet[2885]: E1112 21:35:32.272666 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.272766 kubelet[2885]: W1112 21:35:32.272677 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.272766 kubelet[2885]: E1112 21:35:32.272688 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.272975 kubelet[2885]: E1112 21:35:32.272888 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.272975 kubelet[2885]: W1112 21:35:32.272898 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.272975 kubelet[2885]: E1112 21:35:32.272909 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.273275 kubelet[2885]: E1112 21:35:32.273262 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.273372 kubelet[2885]: W1112 21:35:32.273357 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.273786 kubelet[2885]: E1112 21:35:32.273456 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.274912 kubelet[2885]: E1112 21:35:32.274893 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.275279 kubelet[2885]: W1112 21:35:32.275155 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.275279 kubelet[2885]: E1112 21:35:32.275177 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.275799 kubelet[2885]: E1112 21:35:32.275641 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.275799 kubelet[2885]: W1112 21:35:32.275654 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.275799 kubelet[2885]: E1112 21:35:32.275668 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.278017 kubelet[2885]: E1112 21:35:32.277749 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.278017 kubelet[2885]: W1112 21:35:32.277763 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.278017 kubelet[2885]: E1112 21:35:32.277785 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.278526 kubelet[2885]: E1112 21:35:32.278253 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.278526 kubelet[2885]: W1112 21:35:32.278267 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.278526 kubelet[2885]: E1112 21:35:32.278320 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.278937 kubelet[2885]: E1112 21:35:32.278773 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.278937 kubelet[2885]: W1112 21:35:32.278784 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.278937 kubelet[2885]: E1112 21:35:32.278815 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.279600 kubelet[2885]: E1112 21:35:32.279484 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.279600 kubelet[2885]: W1112 21:35:32.279495 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.279600 kubelet[2885]: E1112 21:35:32.279556 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.280505 kubelet[2885]: E1112 21:35:32.280143 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.280505 kubelet[2885]: W1112 21:35:32.280154 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.280787 kubelet[2885]: E1112 21:35:32.280624 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.280787 kubelet[2885]: W1112 21:35:32.280637 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.280982 kubelet[2885]: E1112 21:35:32.280939 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.281264 kubelet[2885]: E1112 21:35:32.281122 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.281776 kubelet[2885]: E1112 21:35:32.281391 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.281776 kubelet[2885]: W1112 21:35:32.281404 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.281776 kubelet[2885]: E1112 21:35:32.281647 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.282946 kubelet[2885]: E1112 21:35:32.282592 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.282946 kubelet[2885]: W1112 21:35:32.282606 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.282946 kubelet[2885]: E1112 21:35:32.282623 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.283187 kubelet[2885]: E1112 21:35:32.283172 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.283271 kubelet[2885]: W1112 21:35:32.283257 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.283813 kubelet[2885]: E1112 21:35:32.283574 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.284544 kubelet[2885]: E1112 21:35:32.284529 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.284671 kubelet[2885]: W1112 21:35:32.284591 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.284734 kubelet[2885]: E1112 21:35:32.284719 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.285226 kubelet[2885]: E1112 21:35:32.285182 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.285226 kubelet[2885]: W1112 21:35:32.285192 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.285826 kubelet[2885]: E1112 21:35:32.285536 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.286142 kubelet[2885]: E1112 21:35:32.286053 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.286142 kubelet[2885]: W1112 21:35:32.286064 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.286613 kubelet[2885]: E1112 21:35:32.286513 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.287381 kubelet[2885]: E1112 21:35:32.286732 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.287381 kubelet[2885]: W1112 21:35:32.286745 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.287381 kubelet[2885]: E1112 21:35:32.287113 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.288132 kubelet[2885]: E1112 21:35:32.288116 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.288568 kubelet[2885]: W1112 21:35:32.288194 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.288568 kubelet[2885]: E1112 21:35:32.288218 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.289735 kubelet[2885]: E1112 21:35:32.289490 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.289735 kubelet[2885]: W1112 21:35:32.289504 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.289735 kubelet[2885]: E1112 21:35:32.289589 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 21:35:32.289929 kubelet[2885]: E1112 21:35:32.289917 2885 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 21:35:32.290134 kubelet[2885]: W1112 21:35:32.290100 2885 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 21:35:32.290134 kubelet[2885]: E1112 21:35:32.290114 2885 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 21:35:32.390189 containerd[1513]: time="2024-11-12T21:35:32.389613177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:32.392406 containerd[1513]: time="2024-11-12T21:35:32.392360168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 21:35:32.394192 containerd[1513]: time="2024-11-12T21:35:32.393441049Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:32.395976 containerd[1513]: time="2024-11-12T21:35:32.395939248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:32.397581 containerd[1513]: time="2024-11-12T21:35:32.397553199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.457561461s" Nov 12 21:35:32.397677 containerd[1513]: time="2024-11-12T21:35:32.397660934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 21:35:32.400331 containerd[1513]: time="2024-11-12T21:35:32.400306622Z" level=info msg="CreateContainer within sandbox \"5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 21:35:32.419567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501123940.mount: Deactivated successfully. Nov 12 21:35:32.427055 containerd[1513]: time="2024-11-12T21:35:32.426786766Z" level=info msg="CreateContainer within sandbox \"5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405\"" Nov 12 21:35:32.429440 containerd[1513]: time="2024-11-12T21:35:32.428223821Z" level=info msg="StartContainer for \"2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405\"" Nov 12 21:35:32.510546 systemd[1]: Started cri-containerd-2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405.scope - libcontainer container 2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405. Nov 12 21:35:32.599869 containerd[1513]: time="2024-11-12T21:35:32.599830879Z" level=info msg="StartContainer for \"2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405\" returns successfully" Nov 12 21:35:32.625511 systemd[1]: cri-containerd-2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405.scope: Deactivated successfully. 
Nov 12 21:35:32.711705 containerd[1513]: time="2024-11-12T21:35:32.676669782Z" level=info msg="shim disconnected" id=2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405 namespace=k8s.io Nov 12 21:35:32.711705 containerd[1513]: time="2024-11-12T21:35:32.711428750Z" level=warning msg="cleaning up after shim disconnected" id=2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405 namespace=k8s.io Nov 12 21:35:32.711705 containerd[1513]: time="2024-11-12T21:35:32.711446723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 21:35:32.947047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cbd8d634d77c86bceddcccc683e1678227921fd1acf7b95750b0948f9e97405-rootfs.mount: Deactivated successfully. Nov 12 21:35:33.200661 containerd[1513]: time="2024-11-12T21:35:33.200612584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 21:35:34.089523 kubelet[2885]: E1112 21:35:34.088941 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chjk5" podUID="23bc69df-3188-4289-a376-a1e2658f5a79" Nov 12 21:35:36.089015 kubelet[2885]: E1112 21:35:36.088937 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-chjk5" podUID="23bc69df-3188-4289-a376-a1e2658f5a79" Nov 12 21:35:36.138243 containerd[1513]: time="2024-11-12T21:35:36.138174542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:36.139144 containerd[1513]: time="2024-11-12T21:35:36.139100831Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 21:35:36.140189 containerd[1513]: time="2024-11-12T21:35:36.140160221Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:36.142275 containerd[1513]: time="2024-11-12T21:35:36.142215744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:36.143099 containerd[1513]: time="2024-11-12T21:35:36.142722095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 2.942068894s" Nov 12 21:35:36.143099 containerd[1513]: time="2024-11-12T21:35:36.142763173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 21:35:36.145208 containerd[1513]: time="2024-11-12T21:35:36.145145896Z" level=info msg="CreateContainer within sandbox \"5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 21:35:36.187453 containerd[1513]: time="2024-11-12T21:35:36.187387230Z" level=info msg="CreateContainer within sandbox \"5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7\"" Nov 12 21:35:36.192326 containerd[1513]: time="2024-11-12T21:35:36.190375333Z" level=info msg="StartContainer 
for \"66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7\"" Nov 12 21:35:36.262701 systemd[1]: run-containerd-runc-k8s.io-66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7-runc.Zdyzxa.mount: Deactivated successfully. Nov 12 21:35:36.272354 systemd[1]: Started cri-containerd-66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7.scope - libcontainer container 66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7. Nov 12 21:35:36.311573 containerd[1513]: time="2024-11-12T21:35:36.311488991Z" level=info msg="StartContainer for \"66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7\" returns successfully" Nov 12 21:35:36.831932 systemd[1]: cri-containerd-66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7.scope: Deactivated successfully. Nov 12 21:35:36.865815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7-rootfs.mount: Deactivated successfully. 
Nov 12 21:35:36.898171 kubelet[2885]: I1112 21:35:36.897730 2885 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 21:35:36.902364 containerd[1513]: time="2024-11-12T21:35:36.902249333Z" level=info msg="shim disconnected" id=66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7 namespace=k8s.io Nov 12 21:35:36.902491 containerd[1513]: time="2024-11-12T21:35:36.902365773Z" level=warning msg="cleaning up after shim disconnected" id=66620c7c7d17fdf44f3f921cff0f1be1f682156c5a5255d896ca32faaec672c7 namespace=k8s.io Nov 12 21:35:36.902491 containerd[1513]: time="2024-11-12T21:35:36.902412111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 21:35:36.923909 containerd[1513]: time="2024-11-12T21:35:36.923814302Z" level=warning msg="cleanup warnings time=\"2024-11-12T21:35:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 12 21:35:36.938734 kubelet[2885]: I1112 21:35:36.936773 2885 topology_manager.go:215] "Topology Admit Handler" podUID="fb426bfe-a7f9-4da9-8328-a78ab7f2ab08" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bh8r2" Nov 12 21:35:36.948875 kubelet[2885]: I1112 21:35:36.946189 2885 topology_manager.go:215] "Topology Admit Handler" podUID="cb73e657-cad9-47bd-b125-00eeaf912c0a" podNamespace="calico-system" podName="calico-kube-controllers-78cbf79fc8-jfgm6" Nov 12 21:35:36.948875 kubelet[2885]: I1112 21:35:36.946318 2885 topology_manager.go:215] "Topology Admit Handler" podUID="3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w2jtl" Nov 12 21:35:36.948875 kubelet[2885]: I1112 21:35:36.946383 2885 topology_manager.go:215] "Topology Admit Handler" podUID="65dd8fef-58ef-4aad-a3ea-c7f983131d2d" podNamespace="calico-apiserver" podName="calico-apiserver-775b57f4d6-282rn" Nov 12 21:35:36.947735 systemd[1]: Created slice 
kubepods-burstable-podfb426bfe_a7f9_4da9_8328_a78ab7f2ab08.slice - libcontainer container kubepods-burstable-podfb426bfe_a7f9_4da9_8328_a78ab7f2ab08.slice. Nov 12 21:35:36.950760 kubelet[2885]: I1112 21:35:36.950731 2885 topology_manager.go:215] "Topology Admit Handler" podUID="f899bca3-0451-45a6-b02f-8752a2ae4ada" podNamespace="calico-apiserver" podName="calico-apiserver-775b57f4d6-ntjrt" Nov 12 21:35:36.961944 systemd[1]: Created slice kubepods-besteffort-pod65dd8fef_58ef_4aad_a3ea_c7f983131d2d.slice - libcontainer container kubepods-besteffort-pod65dd8fef_58ef_4aad_a3ea_c7f983131d2d.slice. Nov 12 21:35:36.971755 systemd[1]: Created slice kubepods-burstable-pod3ebecd64_9e86_4ac7_9cb2_b37aacbc6d0d.slice - libcontainer container kubepods-burstable-pod3ebecd64_9e86_4ac7_9cb2_b37aacbc6d0d.slice. Nov 12 21:35:36.982633 systemd[1]: Created slice kubepods-besteffort-podf899bca3_0451_45a6_b02f_8752a2ae4ada.slice - libcontainer container kubepods-besteffort-podf899bca3_0451_45a6_b02f_8752a2ae4ada.slice. Nov 12 21:35:36.991758 systemd[1]: Created slice kubepods-besteffort-podcb73e657_cad9_47bd_b125_00eeaf912c0a.slice - libcontainer container kubepods-besteffort-podcb73e657_cad9_47bd_b125_00eeaf912c0a.slice. 
Nov 12 21:35:37.009549 kubelet[2885]: I1112 21:35:37.009305 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb73e657-cad9-47bd-b125-00eeaf912c0a-tigera-ca-bundle\") pod \"calico-kube-controllers-78cbf79fc8-jfgm6\" (UID: \"cb73e657-cad9-47bd-b125-00eeaf912c0a\") " pod="calico-system/calico-kube-controllers-78cbf79fc8-jfgm6" Nov 12 21:35:37.009549 kubelet[2885]: I1112 21:35:37.009348 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkvw\" (UniqueName: \"kubernetes.io/projected/3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d-kube-api-access-dfkvw\") pod \"coredns-7db6d8ff4d-w2jtl\" (UID: \"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d\") " pod="kube-system/coredns-7db6d8ff4d-w2jtl" Nov 12 21:35:37.009549 kubelet[2885]: I1112 21:35:37.009369 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/65dd8fef-58ef-4aad-a3ea-c7f983131d2d-calico-apiserver-certs\") pod \"calico-apiserver-775b57f4d6-282rn\" (UID: \"65dd8fef-58ef-4aad-a3ea-c7f983131d2d\") " pod="calico-apiserver/calico-apiserver-775b57f4d6-282rn" Nov 12 21:35:37.009549 kubelet[2885]: I1112 21:35:37.009386 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp6pd\" (UniqueName: \"kubernetes.io/projected/fb426bfe-a7f9-4da9-8328-a78ab7f2ab08-kube-api-access-tp6pd\") pod \"coredns-7db6d8ff4d-bh8r2\" (UID: \"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08\") " pod="kube-system/coredns-7db6d8ff4d-bh8r2" Nov 12 21:35:37.009549 kubelet[2885]: I1112 21:35:37.009460 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbg2m\" (UniqueName: \"kubernetes.io/projected/cb73e657-cad9-47bd-b125-00eeaf912c0a-kube-api-access-tbg2m\") pod 
\"calico-kube-controllers-78cbf79fc8-jfgm6\" (UID: \"cb73e657-cad9-47bd-b125-00eeaf912c0a\") " pod="calico-system/calico-kube-controllers-78cbf79fc8-jfgm6" Nov 12 21:35:37.009852 kubelet[2885]: I1112 21:35:37.009495 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f899bca3-0451-45a6-b02f-8752a2ae4ada-calico-apiserver-certs\") pod \"calico-apiserver-775b57f4d6-ntjrt\" (UID: \"f899bca3-0451-45a6-b02f-8752a2ae4ada\") " pod="calico-apiserver/calico-apiserver-775b57f4d6-ntjrt" Nov 12 21:35:37.009852 kubelet[2885]: I1112 21:35:37.009511 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb426bfe-a7f9-4da9-8328-a78ab7f2ab08-config-volume\") pod \"coredns-7db6d8ff4d-bh8r2\" (UID: \"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08\") " pod="kube-system/coredns-7db6d8ff4d-bh8r2" Nov 12 21:35:37.010270 kubelet[2885]: I1112 21:35:37.009932 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d-config-volume\") pod \"coredns-7db6d8ff4d-w2jtl\" (UID: \"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d\") " pod="kube-system/coredns-7db6d8ff4d-w2jtl" Nov 12 21:35:37.010270 kubelet[2885]: I1112 21:35:37.009954 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fcrc\" (UniqueName: \"kubernetes.io/projected/f899bca3-0451-45a6-b02f-8752a2ae4ada-kube-api-access-5fcrc\") pod \"calico-apiserver-775b57f4d6-ntjrt\" (UID: \"f899bca3-0451-45a6-b02f-8752a2ae4ada\") " pod="calico-apiserver/calico-apiserver-775b57f4d6-ntjrt" Nov 12 21:35:37.010270 kubelet[2885]: I1112 21:35:37.009971 2885 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d8dxv\" (UniqueName: \"kubernetes.io/projected/65dd8fef-58ef-4aad-a3ea-c7f983131d2d-kube-api-access-d8dxv\") pod \"calico-apiserver-775b57f4d6-282rn\" (UID: \"65dd8fef-58ef-4aad-a3ea-c7f983131d2d\") " pod="calico-apiserver/calico-apiserver-775b57f4d6-282rn" Nov 12 21:35:37.216669 containerd[1513]: time="2024-11-12T21:35:37.216625855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 21:35:37.256422 containerd[1513]: time="2024-11-12T21:35:37.255786192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8r2,Uid:fb426bfe-a7f9-4da9-8328-a78ab7f2ab08,Namespace:kube-system,Attempt:0,}" Nov 12 21:35:37.269959 containerd[1513]: time="2024-11-12T21:35:37.269906012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-282rn,Uid:65dd8fef-58ef-4aad-a3ea-c7f983131d2d,Namespace:calico-apiserver,Attempt:0,}" Nov 12 21:35:37.286945 containerd[1513]: time="2024-11-12T21:35:37.286646037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w2jtl,Uid:3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d,Namespace:kube-system,Attempt:0,}" Nov 12 21:35:37.295776 containerd[1513]: time="2024-11-12T21:35:37.295326310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-ntjrt,Uid:f899bca3-0451-45a6-b02f-8752a2ae4ada,Namespace:calico-apiserver,Attempt:0,}" Nov 12 21:35:37.297966 containerd[1513]: time="2024-11-12T21:35:37.297931206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cbf79fc8-jfgm6,Uid:cb73e657-cad9-47bd-b125-00eeaf912c0a,Namespace:calico-system,Attempt:0,}" Nov 12 21:35:37.596398 containerd[1513]: time="2024-11-12T21:35:37.595913111Z" level=error msg="Failed to destroy network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.597749 containerd[1513]: time="2024-11-12T21:35:37.597293983Z" level=error msg="Failed to destroy network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.605362 containerd[1513]: time="2024-11-12T21:35:37.605159158Z" level=error msg="encountered an error cleaning up failed sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.605526 containerd[1513]: time="2024-11-12T21:35:37.605264447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8r2,Uid:fb426bfe-a7f9-4da9-8328-a78ab7f2ab08,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.612450 containerd[1513]: time="2024-11-12T21:35:37.605291149Z" level=error msg="encountered an error cleaning up failed sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.612603 containerd[1513]: time="2024-11-12T21:35:37.612475771Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-282rn,Uid:65dd8fef-58ef-4aad-a3ea-c7f983131d2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.612603 containerd[1513]: time="2024-11-12T21:35:37.605369137Z" level=error msg="Failed to destroy network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.612992 containerd[1513]: time="2024-11-12T21:35:37.612891521Z" level=error msg="encountered an error cleaning up failed sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.612992 containerd[1513]: time="2024-11-12T21:35:37.612926657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-ntjrt,Uid:f899bca3-0451-45a6-b02f-8752a2ae4ada,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.613527 kubelet[2885]: E1112 21:35:37.613310 2885 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.613527 kubelet[2885]: E1112 21:35:37.613381 2885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-775b57f4d6-ntjrt" Nov 12 21:35:37.613527 kubelet[2885]: E1112 21:35:37.613401 2885 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-775b57f4d6-ntjrt" Nov 12 21:35:37.615335 kubelet[2885]: E1112 21:35:37.614310 2885 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.615335 kubelet[2885]: E1112 21:35:37.614344 2885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bh8r2" Nov 12 21:35:37.615335 kubelet[2885]: E1112 21:35:37.614369 2885 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bh8r2" Nov 12 21:35:37.617227 containerd[1513]: time="2024-11-12T21:35:37.614796417Z" level=error msg="Failed to destroy network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.617227 containerd[1513]: time="2024-11-12T21:35:37.615598000Z" level=error msg="encountered an error cleaning up failed sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.617227 containerd[1513]: time="2024-11-12T21:35:37.615630020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cbf79fc8-jfgm6,Uid:cb73e657-cad9-47bd-b125-00eeaf912c0a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.617338 kubelet[2885]: E1112 21:35:37.614404 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bh8r2_kube-system(fb426bfe-a7f9-4da9-8328-a78ab7f2ab08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bh8r2_kube-system(fb426bfe-a7f9-4da9-8328-a78ab7f2ab08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bh8r2" podUID="fb426bfe-a7f9-4da9-8328-a78ab7f2ab08" Nov 12 21:35:37.617338 kubelet[2885]: E1112 21:35:37.614454 2885 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.617338 kubelet[2885]: E1112 21:35:37.614477 2885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-775b57f4d6-282rn" Nov 12 21:35:37.617451 kubelet[2885]: E1112 21:35:37.614494 2885 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-775b57f4d6-282rn" Nov 12 21:35:37.617451 kubelet[2885]: E1112 21:35:37.614519 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-775b57f4d6-282rn_calico-apiserver(65dd8fef-58ef-4aad-a3ea-c7f983131d2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-775b57f4d6-282rn_calico-apiserver(65dd8fef-58ef-4aad-a3ea-c7f983131d2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-775b57f4d6-282rn" podUID="65dd8fef-58ef-4aad-a3ea-c7f983131d2d" Nov 12 21:35:37.617523 kubelet[2885]: E1112 21:35:37.613460 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-775b57f4d6-ntjrt_calico-apiserver(f899bca3-0451-45a6-b02f-8752a2ae4ada)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-775b57f4d6-ntjrt_calico-apiserver(f899bca3-0451-45a6-b02f-8752a2ae4ada)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-775b57f4d6-ntjrt" podUID="f899bca3-0451-45a6-b02f-8752a2ae4ada" Nov 12 21:35:37.617523 kubelet[2885]: E1112 21:35:37.616965 2885 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.617523 kubelet[2885]: E1112 21:35:37.617013 2885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78cbf79fc8-jfgm6" Nov 12 21:35:37.618192 kubelet[2885]: E1112 21:35:37.617034 2885 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78cbf79fc8-jfgm6" Nov 12 21:35:37.618192 kubelet[2885]: E1112 21:35:37.617061 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78cbf79fc8-jfgm6_calico-system(cb73e657-cad9-47bd-b125-00eeaf912c0a)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-78cbf79fc8-jfgm6_calico-system(cb73e657-cad9-47bd-b125-00eeaf912c0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78cbf79fc8-jfgm6" podUID="cb73e657-cad9-47bd-b125-00eeaf912c0a" Nov 12 21:35:37.619975 containerd[1513]: time="2024-11-12T21:35:37.619848439Z" level=error msg="Failed to destroy network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.620406 containerd[1513]: time="2024-11-12T21:35:37.620374880Z" level=error msg="encountered an error cleaning up failed sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.620473 containerd[1513]: time="2024-11-12T21:35:37.620418573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w2jtl,Uid:3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.620699 kubelet[2885]: E1112 
21:35:37.620614 2885 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:37.620699 kubelet[2885]: E1112 21:35:37.620689 2885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w2jtl" Nov 12 21:35:37.620767 kubelet[2885]: E1112 21:35:37.620707 2885 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w2jtl" Nov 12 21:35:37.620767 kubelet[2885]: E1112 21:35:37.620746 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-w2jtl_kube-system(3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-w2jtl_kube-system(3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w2jtl" podUID="3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d" Nov 12 21:35:38.104332 systemd[1]: Created slice kubepods-besteffort-pod23bc69df_3188_4289_a376_a1e2658f5a79.slice - libcontainer container kubepods-besteffort-pod23bc69df_3188_4289_a376_a1e2658f5a79.slice. Nov 12 21:35:38.109660 containerd[1513]: time="2024-11-12T21:35:38.109587307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chjk5,Uid:23bc69df-3188-4289-a376-a1e2658f5a79,Namespace:calico-system,Attempt:0,}" Nov 12 21:35:38.185642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1-shm.mount: Deactivated successfully. Nov 12 21:35:38.186312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103-shm.mount: Deactivated successfully. Nov 12 21:35:38.186503 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba-shm.mount: Deactivated successfully. 
Nov 12 21:35:38.211727 containerd[1513]: time="2024-11-12T21:35:38.211640508Z" level=error msg="Failed to destroy network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.212776 containerd[1513]: time="2024-11-12T21:35:38.212690792Z" level=error msg="encountered an error cleaning up failed sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.212872 containerd[1513]: time="2024-11-12T21:35:38.212822212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chjk5,Uid:23bc69df-3188-4289-a376-a1e2658f5a79,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.215135 kubelet[2885]: E1112 21:35:38.215057 2885 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.215208 kubelet[2885]: E1112 21:35:38.215176 2885 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chjk5" Nov 12 21:35:38.215256 kubelet[2885]: E1112 21:35:38.215223 2885 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-chjk5" Nov 12 21:35:38.215449 kubelet[2885]: E1112 21:35:38.215320 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-chjk5_calico-system(23bc69df-3188-4289-a376-a1e2658f5a79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-chjk5_calico-system(23bc69df-3188-4289-a376-a1e2658f5a79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-chjk5" podUID="23bc69df-3188-4289-a376-a1e2658f5a79" Nov 12 21:35:38.218684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68-shm.mount: Deactivated successfully. 
Nov 12 21:35:38.219587 kubelet[2885]: I1112 21:35:38.218729 2885 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:35:38.223788 kubelet[2885]: I1112 21:35:38.223408 2885 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:35:38.224816 containerd[1513]: time="2024-11-12T21:35:38.224542110Z" level=info msg="StopPodSandbox for \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\"" Nov 12 21:35:38.226500 containerd[1513]: time="2024-11-12T21:35:38.225891212Z" level=info msg="StopPodSandbox for \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\"" Nov 12 21:35:38.228567 containerd[1513]: time="2024-11-12T21:35:38.228522890Z" level=info msg="Ensure that sandbox de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68 in task-service has been cleanup successfully" Nov 12 21:35:38.230781 containerd[1513]: time="2024-11-12T21:35:38.230191057Z" level=info msg="Ensure that sandbox 62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1 in task-service has been cleanup successfully" Nov 12 21:35:38.231339 kubelet[2885]: I1112 21:35:38.231305 2885 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:35:38.233815 containerd[1513]: time="2024-11-12T21:35:38.233758963Z" level=info msg="StopPodSandbox for \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\"" Nov 12 21:35:38.234010 containerd[1513]: time="2024-11-12T21:35:38.233982838Z" level=info msg="Ensure that sandbox e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba in task-service has been cleanup successfully" Nov 12 21:35:38.236302 kubelet[2885]: I1112 21:35:38.236287 2885 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:35:38.237387 containerd[1513]: time="2024-11-12T21:35:38.237044824Z" level=info msg="StopPodSandbox for \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\"" Nov 12 21:35:38.237387 containerd[1513]: time="2024-11-12T21:35:38.237192434Z" level=info msg="Ensure that sandbox 9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0 in task-service has been cleanup successfully" Nov 12 21:35:38.241715 kubelet[2885]: I1112 21:35:38.241690 2885 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:35:38.243526 containerd[1513]: time="2024-11-12T21:35:38.243492217Z" level=info msg="StopPodSandbox for \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\"" Nov 12 21:35:38.243749 containerd[1513]: time="2024-11-12T21:35:38.243691686Z" level=info msg="Ensure that sandbox 4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7 in task-service has been cleanup successfully" Nov 12 21:35:38.245000 kubelet[2885]: I1112 21:35:38.244962 2885 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:35:38.245553 containerd[1513]: time="2024-11-12T21:35:38.245524908Z" level=info msg="StopPodSandbox for \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\"" Nov 12 21:35:38.245920 containerd[1513]: time="2024-11-12T21:35:38.245862820Z" level=info msg="Ensure that sandbox 6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103 in task-service has been cleanup successfully" Nov 12 21:35:38.326065 containerd[1513]: time="2024-11-12T21:35:38.326020008Z" level=error msg="StopPodSandbox for \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\" failed" error="failed to destroy network for sandbox 
\"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.327237 kubelet[2885]: E1112 21:35:38.327208 2885 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:35:38.327474 kubelet[2885]: E1112 21:35:38.327336 2885 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0"} Nov 12 21:35:38.327474 kubelet[2885]: E1112 21:35:38.327410 2885 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb73e657-cad9-47bd-b125-00eeaf912c0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 21:35:38.327474 kubelet[2885]: E1112 21:35:38.327436 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb73e657-cad9-47bd-b125-00eeaf912c0a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78cbf79fc8-jfgm6" podUID="cb73e657-cad9-47bd-b125-00eeaf912c0a" Nov 12 21:35:38.330149 containerd[1513]: time="2024-11-12T21:35:38.330058377Z" level=error msg="StopPodSandbox for \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\" failed" error="failed to destroy network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.330310 kubelet[2885]: E1112 21:35:38.330207 2885 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:35:38.330310 kubelet[2885]: E1112 21:35:38.330237 2885 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103"} Nov 12 21:35:38.330310 kubelet[2885]: E1112 21:35:38.330259 2885 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65dd8fef-58ef-4aad-a3ea-c7f983131d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 21:35:38.330310 kubelet[2885]: E1112 21:35:38.330280 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65dd8fef-58ef-4aad-a3ea-c7f983131d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-775b57f4d6-282rn" podUID="65dd8fef-58ef-4aad-a3ea-c7f983131d2d" Nov 12 21:35:38.346254 containerd[1513]: time="2024-11-12T21:35:38.346065436Z" level=error msg="StopPodSandbox for \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\" failed" error="failed to destroy network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.346691 kubelet[2885]: E1112 21:35:38.346468 2885 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:35:38.346691 kubelet[2885]: E1112 21:35:38.346517 2885 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68"} Nov 12 
21:35:38.346691 kubelet[2885]: E1112 21:35:38.346551 2885 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23bc69df-3188-4289-a376-a1e2658f5a79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 21:35:38.346691 kubelet[2885]: E1112 21:35:38.346579 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23bc69df-3188-4289-a376-a1e2658f5a79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-chjk5" podUID="23bc69df-3188-4289-a376-a1e2658f5a79" Nov 12 21:35:38.350888 containerd[1513]: time="2024-11-12T21:35:38.350698535Z" level=error msg="StopPodSandbox for \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\" failed" error="failed to destroy network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.351342 kubelet[2885]: E1112 21:35:38.351142 2885 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:35:38.351342 kubelet[2885]: E1112 21:35:38.351221 2885 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba"} Nov 12 21:35:38.351342 kubelet[2885]: E1112 21:35:38.351266 2885 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 21:35:38.351342 kubelet[2885]: E1112 21:35:38.351307 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bh8r2" podUID="fb426bfe-a7f9-4da9-8328-a78ab7f2ab08" Nov 12 21:35:38.356173 containerd[1513]: time="2024-11-12T21:35:38.355356030Z" level=error msg="StopPodSandbox for \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\" failed" error="failed to destroy network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.356249 kubelet[2885]: E1112 21:35:38.355696 2885 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:35:38.356249 kubelet[2885]: E1112 21:35:38.355759 2885 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1"} Nov 12 21:35:38.356249 kubelet[2885]: E1112 21:35:38.355803 2885 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 21:35:38.356249 kubelet[2885]: E1112 21:35:38.355833 2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w2jtl" podUID="3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d" Nov 12 21:35:38.359778 containerd[1513]: time="2024-11-12T21:35:38.359723905Z" level=error msg="StopPodSandbox for \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\" failed" error="failed to destroy network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 21:35:38.360102 kubelet[2885]: E1112 21:35:38.360037 2885 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:35:38.360102 kubelet[2885]: E1112 21:35:38.360087 2885 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7"} Nov 12 21:35:38.360192 kubelet[2885]: E1112 21:35:38.360124 2885 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f899bca3-0451-45a6-b02f-8752a2ae4ada\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 21:35:38.360192 kubelet[2885]: E1112 21:35:38.360145 
2885 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f899bca3-0451-45a6-b02f-8752a2ae4ada\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-775b57f4d6-ntjrt" podUID="f899bca3-0451-45a6-b02f-8752a2ae4ada" Nov 12 21:35:40.085410 systemd[1]: Started sshd@21-188.245.86.234:22-35.240.185.59:39318.service - OpenSSH per-connection server daemon (35.240.185.59:39318). Nov 12 21:35:40.805977 sshd[3961]: Invalid user oracle from 35.240.185.59 port 39318 Nov 12 21:35:40.968216 sshd[3961]: Connection closed by invalid user oracle 35.240.185.59 port 39318 [preauth] Nov 12 21:35:40.973127 systemd[1]: sshd@21-188.245.86.234:22-35.240.185.59:39318.service: Deactivated successfully. Nov 12 21:35:43.957325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877680521.mount: Deactivated successfully. 
Nov 12 21:35:44.128753 containerd[1513]: time="2024-11-12T21:35:44.094330431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 21:35:44.168190 containerd[1513]: time="2024-11-12T21:35:44.168052306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:44.199372 containerd[1513]: time="2024-11-12T21:35:44.199311417Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:44.203355 containerd[1513]: time="2024-11-12T21:35:44.202924104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:44.207280 containerd[1513]: time="2024-11-12T21:35:44.207186856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 6.986653555s" Nov 12 21:35:44.207280 containerd[1513]: time="2024-11-12T21:35:44.207218697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 21:35:44.414184 containerd[1513]: time="2024-11-12T21:35:44.414131472Z" level=info msg="CreateContainer within sandbox \"5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 21:35:44.502360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942687943.mount: 
Deactivated successfully. Nov 12 21:35:44.522867 containerd[1513]: time="2024-11-12T21:35:44.522806489Z" level=info msg="CreateContainer within sandbox \"5877f63725eccf5ddf0688649b961d1b1aef23c5b8f6a8dc12b98ab6eaaccc72\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8bd69c6611a99fbc792bbfdc7cb96c218d63525f84b07707edf5bd12ac747cc3\"" Nov 12 21:35:44.539097 containerd[1513]: time="2024-11-12T21:35:44.538503829Z" level=info msg="StartContainer for \"8bd69c6611a99fbc792bbfdc7cb96c218d63525f84b07707edf5bd12ac747cc3\"" Nov 12 21:35:44.687454 systemd[1]: Started cri-containerd-8bd69c6611a99fbc792bbfdc7cb96c218d63525f84b07707edf5bd12ac747cc3.scope - libcontainer container 8bd69c6611a99fbc792bbfdc7cb96c218d63525f84b07707edf5bd12ac747cc3. Nov 12 21:35:44.740079 containerd[1513]: time="2024-11-12T21:35:44.739738100Z" level=info msg="StartContainer for \"8bd69c6611a99fbc792bbfdc7cb96c218d63525f84b07707edf5bd12ac747cc3\" returns successfully" Nov 12 21:35:44.837391 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 21:35:44.838029 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 12 21:35:45.581572 kubelet[2885]: I1112 21:35:45.580395 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-65bz6" podStartSLOduration=2.8251622899999997 podStartE2EDuration="18.578417964s" podCreationTimestamp="2024-11-12 21:35:27 +0000 UTC" firstStartedPulling="2024-11-12 21:35:28.470523503 +0000 UTC m=+22.494749013" lastFinishedPulling="2024-11-12 21:35:44.223779176 +0000 UTC m=+38.248004687" observedRunningTime="2024-11-12 21:35:45.575283095 +0000 UTC m=+39.599508604" watchObservedRunningTime="2024-11-12 21:35:45.578417964 +0000 UTC m=+39.602643474" Nov 12 21:35:51.090130 containerd[1513]: time="2024-11-12T21:35:51.089765206Z" level=info msg="StopPodSandbox for \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\"" Nov 12 21:35:51.090892 containerd[1513]: time="2024-11-12T21:35:51.090661862Z" level=info msg="StopPodSandbox for \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\"" Nov 12 21:35:51.232096 kubelet[2885]: I1112 21:35:51.231536 2885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.168 [INFO][4291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.169 [INFO][4291] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" iface="eth0" netns="/var/run/netns/cni-8580614e-7f98-d785-a65f-e41147024c57" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.170 [INFO][4291] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" iface="eth0" netns="/var/run/netns/cni-8580614e-7f98-d785-a65f-e41147024c57" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.172 [INFO][4291] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" iface="eth0" netns="/var/run/netns/cni-8580614e-7f98-d785-a65f-e41147024c57" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.172 [INFO][4291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.172 [INFO][4291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.362 [INFO][4304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.362 [INFO][4304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.363 [INFO][4304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.372 [WARNING][4304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.372 [INFO][4304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.374 [INFO][4304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:51.382308 containerd[1513]: 2024-11-12 21:35:51.376 [INFO][4291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.167 [INFO][4292] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.170 [INFO][4292] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" iface="eth0" netns="/var/run/netns/cni-6be68566-6ab3-6de8-20ee-25055d0a17c2" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.170 [INFO][4292] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" iface="eth0" netns="/var/run/netns/cni-6be68566-6ab3-6de8-20ee-25055d0a17c2" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.172 [INFO][4292] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" iface="eth0" netns="/var/run/netns/cni-6be68566-6ab3-6de8-20ee-25055d0a17c2" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.172 [INFO][4292] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.172 [INFO][4292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.361 [INFO][4303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.362 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.374 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.379 [WARNING][4303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.380 [INFO][4303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.381 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:51.391754 containerd[1513]: 2024-11-12 21:35:51.385 [INFO][4292] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:35:51.391556 systemd[1]: run-netns-cni\x2d8580614e\x2d7f98\x2dd785\x2da65f\x2de41147024c57.mount: Deactivated successfully. Nov 12 21:35:51.397858 containerd[1513]: time="2024-11-12T21:35:51.393622436Z" level=info msg="TearDown network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\" successfully" Nov 12 21:35:51.397858 containerd[1513]: time="2024-11-12T21:35:51.393654457Z" level=info msg="StopPodSandbox for \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\" returns successfully" Nov 12 21:35:51.397858 containerd[1513]: time="2024-11-12T21:35:51.394544981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w2jtl,Uid:3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d,Namespace:kube-system,Attempt:1,}" Nov 12 21:35:51.398028 systemd[1]: run-netns-cni\x2d6be68566\x2d6ab3\x2d6de8\x2d20ee\x2d25055d0a17c2.mount: Deactivated successfully. 
Nov 12 21:35:51.398630 containerd[1513]: time="2024-11-12T21:35:51.398596579Z" level=info msg="TearDown network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\" successfully" Nov 12 21:35:51.398721 containerd[1513]: time="2024-11-12T21:35:51.398631115Z" level=info msg="StopPodSandbox for \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\" returns successfully" Nov 12 21:35:51.400124 containerd[1513]: time="2024-11-12T21:35:51.400043051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cbf79fc8-jfgm6,Uid:cb73e657-cad9-47bd-b125-00eeaf912c0a,Namespace:calico-system,Attempt:1,}" Nov 12 21:35:51.583843 systemd-networkd[1406]: calif2d98f555e4: Link UP Nov 12 21:35:51.584062 systemd-networkd[1406]: calif2d98f555e4: Gained carrier Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.460 [INFO][4321] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.481 [INFO][4321] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0 calico-kube-controllers-78cbf79fc8- calico-system cb73e657-cad9-47bd-b125-00eeaf912c0a 773 0 2024-11-12 21:35:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78cbf79fc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-0-6-01c097edc7 calico-kube-controllers-78cbf79fc8-jfgm6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif2d98f555e4 [] []}} ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" 
WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.481 [INFO][4321] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.512 [INFO][4341] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" HandleID="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.527 [INFO][4341] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" HandleID="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319370), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-0-6-01c097edc7", "pod":"calico-kube-controllers-78cbf79fc8-jfgm6", "timestamp":"2024-11-12 21:35:51.512593403 +0000 UTC"}, Hostname:"ci-4081-2-0-6-01c097edc7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.527 [INFO][4341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.527 [INFO][4341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.527 [INFO][4341] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-6-01c097edc7' Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.530 [INFO][4341] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.538 [INFO][4341] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.543 [INFO][4341] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.545 [INFO][4341] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.546 [INFO][4341] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.547 [INFO][4341] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.548 [INFO][4341] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.552 [INFO][4341] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" 
host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.557 [INFO][4341] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.65/26] block=192.168.75.64/26 handle="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.557 [INFO][4341] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.65/26] handle="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.557 [INFO][4341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:51.607016 containerd[1513]: 2024-11-12 21:35:51.557 [INFO][4341] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.65/26] IPv6=[] ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" HandleID="k8s-pod-network.336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.608728 containerd[1513]: 2024-11-12 21:35:51.560 [INFO][4321] cni-plugin/k8s.go 386: Populated endpoint ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0", GenerateName:"calico-kube-controllers-78cbf79fc8-", Namespace:"calico-system", SelfLink:"", UID:"cb73e657-cad9-47bd-b125-00eeaf912c0a", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78cbf79fc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"", Pod:"calico-kube-controllers-78cbf79fc8-jfgm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2d98f555e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:51.608728 containerd[1513]: 2024-11-12 21:35:51.560 [INFO][4321] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.65/32] ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.608728 containerd[1513]: 2024-11-12 21:35:51.560 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2d98f555e4 ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.608728 containerd[1513]: 2024-11-12 21:35:51.582 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.608728 containerd[1513]: 2024-11-12 21:35:51.585 [INFO][4321] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0", GenerateName:"calico-kube-controllers-78cbf79fc8-", Namespace:"calico-system", SelfLink:"", UID:"cb73e657-cad9-47bd-b125-00eeaf912c0a", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78cbf79fc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb", Pod:"calico-kube-controllers-78cbf79fc8-jfgm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2d98f555e4", MAC:"12:e6:01:bb:e7:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:51.608728 containerd[1513]: 2024-11-12 21:35:51.598 [INFO][4321] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb" Namespace="calico-system" Pod="calico-kube-controllers-78cbf79fc8-jfgm6" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:35:51.620327 systemd-networkd[1406]: calie44bdbaf52f: Link UP Nov 12 21:35:51.620604 systemd-networkd[1406]: calie44bdbaf52f: Gained carrier Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.463 [INFO][4319] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.477 [INFO][4319] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0 coredns-7db6d8ff4d- kube-system 3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d 774 0 2024-11-12 21:35:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-0-6-01c097edc7 coredns-7db6d8ff4d-w2jtl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie44bdbaf52f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-" Nov 12 21:35:51.647444 containerd[1513]: 
2024-11-12 21:35:51.477 [INFO][4319] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.516 [INFO][4342] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" HandleID="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.535 [INFO][4342] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" HandleID="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-0-6-01c097edc7", "pod":"coredns-7db6d8ff4d-w2jtl", "timestamp":"2024-11-12 21:35:51.516434752 +0000 UTC"}, Hostname:"ci-4081-2-0-6-01c097edc7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.535 [INFO][4342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.557 [INFO][4342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.558 [INFO][4342] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-6-01c097edc7' Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.560 [INFO][4342] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.563 [INFO][4342] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.569 [INFO][4342] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.571 [INFO][4342] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.578 [INFO][4342] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.579 [INFO][4342] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.581 [INFO][4342] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5 Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.591 [INFO][4342] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.607 [INFO][4342] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.75.66/26] block=192.168.75.64/26 handle="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.607 [INFO][4342] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.66/26] handle="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.607 [INFO][4342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:51.647444 containerd[1513]: 2024-11-12 21:35:51.608 [INFO][4342] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.66/26] IPv6=[] ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" HandleID="k8s-pod-network.f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.648919 containerd[1513]: 2024-11-12 21:35:51.614 [INFO][4319] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"", Pod:"coredns-7db6d8ff4d-w2jtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie44bdbaf52f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:51.648919 containerd[1513]: 2024-11-12 21:35:51.616 [INFO][4319] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.66/32] ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.648919 containerd[1513]: 2024-11-12 21:35:51.616 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie44bdbaf52f ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.648919 containerd[1513]: 2024-11-12 21:35:51.620 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.648919 containerd[1513]: 2024-11-12 21:35:51.620 [INFO][4319] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5", Pod:"coredns-7db6d8ff4d-w2jtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie44bdbaf52f", MAC:"96:08:47:d0:9a:7d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:51.648919 containerd[1513]: 2024-11-12 21:35:51.642 [INFO][4319] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w2jtl" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:35:51.684487 containerd[1513]: time="2024-11-12T21:35:51.683886014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:51.684487 containerd[1513]: time="2024-11-12T21:35:51.683940037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:51.684487 containerd[1513]: time="2024-11-12T21:35:51.683953142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:51.684487 containerd[1513]: time="2024-11-12T21:35:51.684027473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:51.690427 containerd[1513]: time="2024-11-12T21:35:51.688441130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:51.690427 containerd[1513]: time="2024-11-12T21:35:51.688488251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:51.690427 containerd[1513]: time="2024-11-12T21:35:51.688514119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:51.690427 containerd[1513]: time="2024-11-12T21:35:51.688597668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:51.724323 systemd[1]: Started cri-containerd-f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5.scope - libcontainer container f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5. Nov 12 21:35:51.729232 systemd[1]: Started cri-containerd-336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb.scope - libcontainer container 336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb. 
Nov 12 21:35:51.780950 containerd[1513]: time="2024-11-12T21:35:51.780627309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w2jtl,Uid:3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d,Namespace:kube-system,Attempt:1,} returns sandbox id \"f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5\"" Nov 12 21:35:51.787732 containerd[1513]: time="2024-11-12T21:35:51.787678323Z" level=info msg="CreateContainer within sandbox \"f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 21:35:51.810036 containerd[1513]: time="2024-11-12T21:35:51.809989100Z" level=info msg="CreateContainer within sandbox \"f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eebcdb2ed1176de98fb4bbf2350b7ecde8b041d3afac1b40e65a01067de8c38d\"" Nov 12 21:35:51.810645 containerd[1513]: time="2024-11-12T21:35:51.810616624Z" level=info msg="StartContainer for \"eebcdb2ed1176de98fb4bbf2350b7ecde8b041d3afac1b40e65a01067de8c38d\"" Nov 12 21:35:51.848563 containerd[1513]: time="2024-11-12T21:35:51.848513174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cbf79fc8-jfgm6,Uid:cb73e657-cad9-47bd-b125-00eeaf912c0a,Namespace:calico-system,Attempt:1,} returns sandbox id \"336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb\"" Nov 12 21:35:51.864106 containerd[1513]: time="2024-11-12T21:35:51.861805474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 21:35:51.867315 systemd[1]: Started cri-containerd-eebcdb2ed1176de98fb4bbf2350b7ecde8b041d3afac1b40e65a01067de8c38d.scope - libcontainer container eebcdb2ed1176de98fb4bbf2350b7ecde8b041d3afac1b40e65a01067de8c38d. 
Nov 12 21:35:51.928906 containerd[1513]: time="2024-11-12T21:35:51.928791157Z" level=info msg="StartContainer for \"eebcdb2ed1176de98fb4bbf2350b7ecde8b041d3afac1b40e65a01067de8c38d\" returns successfully" Nov 12 21:35:52.012419 kernel: bpftool[4529]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 21:35:52.090750 containerd[1513]: time="2024-11-12T21:35:52.090707898Z" level=info msg="StopPodSandbox for \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\"" Nov 12 21:35:52.097516 containerd[1513]: time="2024-11-12T21:35:52.097272829Z" level=info msg="StopPodSandbox for \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\"" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.198 [INFO][4560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.199 [INFO][4560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" iface="eth0" netns="/var/run/netns/cni-d40b11ce-bcf6-8e8f-182b-af6172453093" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.199 [INFO][4560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" iface="eth0" netns="/var/run/netns/cni-d40b11ce-bcf6-8e8f-182b-af6172453093" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.200 [INFO][4560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" iface="eth0" netns="/var/run/netns/cni-d40b11ce-bcf6-8e8f-182b-af6172453093" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.200 [INFO][4560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.200 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.247 [INFO][4575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.248 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.249 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.257 [WARNING][4575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.257 [INFO][4575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.260 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:52.277104 containerd[1513]: 2024-11-12 21:35:52.265 [INFO][4560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:35:52.277104 containerd[1513]: time="2024-11-12T21:35:52.273488274Z" level=info msg="TearDown network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\" successfully" Nov 12 21:35:52.277104 containerd[1513]: time="2024-11-12T21:35:52.273526075Z" level=info msg="StopPodSandbox for \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\" returns successfully" Nov 12 21:35:52.277104 containerd[1513]: time="2024-11-12T21:35:52.274507462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-282rn,Uid:65dd8fef-58ef-4aad-a3ea-c7f983131d2d,Namespace:calico-apiserver,Attempt:1,}" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.236 [INFO][4561] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.236 [INFO][4561] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" iface="eth0" netns="/var/run/netns/cni-eaac23f8-e4c6-4eb5-0457-75884bb4f499" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.238 [INFO][4561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" iface="eth0" netns="/var/run/netns/cni-eaac23f8-e4c6-4eb5-0457-75884bb4f499" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.239 [INFO][4561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" iface="eth0" netns="/var/run/netns/cni-eaac23f8-e4c6-4eb5-0457-75884bb4f499" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.239 [INFO][4561] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.239 [INFO][4561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.287 [INFO][4583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.288 [INFO][4583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.288 [INFO][4583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.298 [WARNING][4583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.298 [INFO][4583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.301 [INFO][4583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:52.315353 containerd[1513]: 2024-11-12 21:35:52.307 [INFO][4561] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:35:52.315823 containerd[1513]: time="2024-11-12T21:35:52.315791486Z" level=info msg="TearDown network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\" successfully" Nov 12 21:35:52.315823 containerd[1513]: time="2024-11-12T21:35:52.315821603Z" level=info msg="StopPodSandbox for \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\" returns successfully" Nov 12 21:35:52.317288 containerd[1513]: time="2024-11-12T21:35:52.317262104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chjk5,Uid:23bc69df-3188-4289-a376-a1e2658f5a79,Namespace:calico-system,Attempt:1,}" Nov 12 21:35:52.391496 systemd[1]: run-netns-cni\x2deaac23f8\x2de4c6\x2d4eb5\x2d0457\x2d75884bb4f499.mount: Deactivated successfully. 
Nov 12 21:35:52.393519 systemd[1]: run-netns-cni\x2dd40b11ce\x2dbcf6\x2d8e8f\x2d182b\x2daf6172453093.mount: Deactivated successfully. Nov 12 21:35:52.480244 kubelet[2885]: I1112 21:35:52.479530 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w2jtl" podStartSLOduration=31.479510352 podStartE2EDuration="31.479510352s" podCreationTimestamp="2024-11-12 21:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:35:52.432208198 +0000 UTC m=+46.456433738" watchObservedRunningTime="2024-11-12 21:35:52.479510352 +0000 UTC m=+46.503735862" Nov 12 21:35:52.657149 systemd-networkd[1406]: cali30834f0ca55: Link UP Nov 12 21:35:52.659997 systemd-networkd[1406]: cali30834f0ca55: Gained carrier Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.542 [INFO][4611] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0 calico-apiserver-775b57f4d6- calico-apiserver 65dd8fef-58ef-4aad-a3ea-c7f983131d2d 796 0 2024-11-12 21:35:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:775b57f4d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-0-6-01c097edc7 calico-apiserver-775b57f4d6-282rn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30834f0ca55 [] []}} ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.542 [INFO][4611] cni-plugin/k8s.go 
77: Extracted identifiers for CmdAddK8s ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.586 [INFO][4634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" HandleID="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.609 [INFO][4634] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" HandleID="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-0-6-01c097edc7", "pod":"calico-apiserver-775b57f4d6-282rn", "timestamp":"2024-11-12 21:35:52.585871671 +0000 UTC"}, Hostname:"ci-4081-2-0-6-01c097edc7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.609 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.610 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.610 [INFO][4634] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-6-01c097edc7' Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.613 [INFO][4634] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.618 [INFO][4634] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.624 [INFO][4634] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.627 [INFO][4634] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.633 [INFO][4634] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.633 [INFO][4634] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.635 [INFO][4634] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.640 [INFO][4634] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.648 [INFO][4634] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.75.67/26] block=192.168.75.64/26 handle="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.648 [INFO][4634] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.67/26] handle="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.648 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:52.683129 containerd[1513]: 2024-11-12 21:35:52.648 [INFO][4634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.67/26] IPv6=[] ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" HandleID="k8s-pod-network.b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.685899 containerd[1513]: 2024-11-12 21:35:52.652 [INFO][4611] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"65dd8fef-58ef-4aad-a3ea-c7f983131d2d", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"", Pod:"calico-apiserver-775b57f4d6-282rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30834f0ca55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:52.685899 containerd[1513]: 2024-11-12 21:35:52.652 [INFO][4611] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.67/32] ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.685899 containerd[1513]: 2024-11-12 21:35:52.652 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30834f0ca55 ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.685899 containerd[1513]: 2024-11-12 21:35:52.661 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" 
WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.685899 containerd[1513]: 2024-11-12 21:35:52.661 [INFO][4611] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"65dd8fef-58ef-4aad-a3ea-c7f983131d2d", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e", Pod:"calico-apiserver-775b57f4d6-282rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30834f0ca55", MAC:"36:b4:49:31:63:ee", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:52.685899 containerd[1513]: 2024-11-12 21:35:52.670 [INFO][4611] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-282rn" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:35:52.726047 containerd[1513]: time="2024-11-12T21:35:52.724665220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:52.726047 containerd[1513]: time="2024-11-12T21:35:52.725314936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:52.726047 containerd[1513]: time="2024-11-12T21:35:52.725329334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:52.726047 containerd[1513]: time="2024-11-12T21:35:52.725429514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:52.746936 systemd-networkd[1406]: calif2d98f555e4: Gained IPv6LL Nov 12 21:35:52.756681 systemd-networkd[1406]: cali8b0007e223a: Link UP Nov 12 21:35:52.757754 systemd-networkd[1406]: cali8b0007e223a: Gained carrier Nov 12 21:35:52.798161 systemd[1]: Started cri-containerd-b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e.scope - libcontainer container b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e. 
Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.540 [INFO][4606] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0 csi-node-driver- calico-system 23bc69df-3188-4289-a376-a1e2658f5a79 797 0 2024-11-12 21:35:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-0-6-01c097edc7 csi-node-driver-chjk5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8b0007e223a [] []}} ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.540 [INFO][4606] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.615 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" HandleID="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.632 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" 
HandleID="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-0-6-01c097edc7", "pod":"csi-node-driver-chjk5", "timestamp":"2024-11-12 21:35:52.614906467 +0000 UTC"}, Hostname:"ci-4081-2-0-6-01c097edc7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.633 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.648 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.649 [INFO][4633] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-6-01c097edc7' Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.652 [INFO][4633] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.675 [INFO][4633] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.683 [INFO][4633] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.687 [INFO][4633] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.691 [INFO][4633] ipam/ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.691 [INFO][4633] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.695 [INFO][4633] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6 Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.702 [INFO][4633] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.713 [INFO][4633] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.68/26] block=192.168.75.64/26 handle="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.715 [INFO][4633] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.68/26] handle="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.715 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 21:35:52.802613 containerd[1513]: 2024-11-12 21:35:52.715 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.68/26] IPv6=[] ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" HandleID="k8s-pod-network.340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.805958 containerd[1513]: 2024-11-12 21:35:52.741 [INFO][4606] cni-plugin/k8s.go 386: Populated endpoint ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23bc69df-3188-4289-a376-a1e2658f5a79", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"", Pod:"csi-node-driver-chjk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8b0007e223a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:52.805958 containerd[1513]: 2024-11-12 21:35:52.743 [INFO][4606] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.68/32] ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.805958 containerd[1513]: 2024-11-12 21:35:52.744 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b0007e223a ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.805958 containerd[1513]: 2024-11-12 21:35:52.757 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.805958 containerd[1513]: 2024-11-12 21:35:52.758 [INFO][4606] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"23bc69df-3188-4289-a376-a1e2658f5a79", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6", Pod:"csi-node-driver-chjk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8b0007e223a", MAC:"a6:e0:4e:68:97:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:52.805958 containerd[1513]: 2024-11-12 21:35:52.780 [INFO][4606] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6" Namespace="calico-system" Pod="csi-node-driver-chjk5" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:35:52.818235 systemd-networkd[1406]: vxlan.calico: Link UP Nov 12 21:35:52.818245 systemd-networkd[1406]: vxlan.calico: Gained carrier Nov 12 21:35:52.863765 containerd[1513]: time="2024-11-12T21:35:52.862844060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:52.863765 containerd[1513]: time="2024-11-12T21:35:52.862922119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:52.863765 containerd[1513]: time="2024-11-12T21:35:52.862939371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:52.863765 containerd[1513]: time="2024-11-12T21:35:52.863026778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:52.869165 systemd-networkd[1406]: calie44bdbaf52f: Gained IPv6LL Nov 12 21:35:52.906685 systemd[1]: Started cri-containerd-340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6.scope - libcontainer container 340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6. Nov 12 21:35:52.960576 containerd[1513]: time="2024-11-12T21:35:52.960453376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-282rn,Uid:65dd8fef-58ef-4aad-a3ea-c7f983131d2d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e\"" Nov 12 21:35:52.992646 containerd[1513]: time="2024-11-12T21:35:52.992606696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-chjk5,Uid:23bc69df-3188-4289-a376-a1e2658f5a79,Namespace:calico-system,Attempt:1,} returns sandbox id \"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6\"" Nov 12 21:35:53.068190 systemd[1]: Started sshd@22-188.245.86.234:22-35.240.185.59:39756.service - OpenSSH per-connection server daemon (35.240.185.59:39756). 
Nov 12 21:35:53.092283 containerd[1513]: time="2024-11-12T21:35:53.092144850Z" level=info msg="StopPodSandbox for \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\"" Nov 12 21:35:53.094129 containerd[1513]: time="2024-11-12T21:35:53.093190430Z" level=info msg="StopPodSandbox for \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\"" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.206 [INFO][4812] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.207 [INFO][4812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" iface="eth0" netns="/var/run/netns/cni-813a1fb9-7596-f463-5786-510fb748f02c" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.208 [INFO][4812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" iface="eth0" netns="/var/run/netns/cni-813a1fb9-7596-f463-5786-510fb748f02c" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.209 [INFO][4812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" iface="eth0" netns="/var/run/netns/cni-813a1fb9-7596-f463-5786-510fb748f02c" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.209 [INFO][4812] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.209 [INFO][4812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.259 [INFO][4833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.259 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.259 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.266 [WARNING][4833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.266 [INFO][4833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.271 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:53.295119 containerd[1513]: 2024-11-12 21:35:53.288 [INFO][4812] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:35:53.296979 containerd[1513]: time="2024-11-12T21:35:53.295798203Z" level=info msg="TearDown network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\" successfully" Nov 12 21:35:53.296979 containerd[1513]: time="2024-11-12T21:35:53.295824513Z" level=info msg="StopPodSandbox for \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\" returns successfully" Nov 12 21:35:53.297705 containerd[1513]: time="2024-11-12T21:35:53.297662450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-ntjrt,Uid:f899bca3-0451-45a6-b02f-8752a2ae4ada,Namespace:calico-apiserver,Attempt:1,}" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.234 [INFO][4813] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.235 [INFO][4813] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" iface="eth0" netns="/var/run/netns/cni-cddc620c-3417-0810-c8dc-7dfab18ab5f3" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.235 [INFO][4813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" iface="eth0" netns="/var/run/netns/cni-cddc620c-3417-0810-c8dc-7dfab18ab5f3" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.236 [INFO][4813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" iface="eth0" netns="/var/run/netns/cni-cddc620c-3417-0810-c8dc-7dfab18ab5f3" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.236 [INFO][4813] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.236 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.317 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.320 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.320 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.332 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.332 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.335 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:53.347064 containerd[1513]: 2024-11-12 21:35:53.342 [INFO][4813] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:35:53.349781 containerd[1513]: time="2024-11-12T21:35:53.347853152Z" level=info msg="TearDown network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\" successfully" Nov 12 21:35:53.349781 containerd[1513]: time="2024-11-12T21:35:53.348366478Z" level=info msg="StopPodSandbox for \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\" returns successfully" Nov 12 21:35:53.350827 containerd[1513]: time="2024-11-12T21:35:53.350363048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8r2,Uid:fb426bfe-a7f9-4da9-8328-a78ab7f2ab08,Namespace:kube-system,Attempt:1,}" Nov 12 21:35:53.396592 systemd[1]: run-netns-cni\x2d813a1fb9\x2d7596\x2df463\x2d5786\x2d510fb748f02c.mount: Deactivated successfully. 
Nov 12 21:35:53.397365 systemd[1]: run-netns-cni\x2dcddc620c\x2d3417\x2d0810\x2dc8dc\x2d7dfab18ab5f3.mount: Deactivated successfully. Nov 12 21:35:53.506542 systemd-networkd[1406]: cali2184f0a01fb: Link UP Nov 12 21:35:53.508270 systemd-networkd[1406]: cali2184f0a01fb: Gained carrier Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.395 [INFO][4876] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0 calico-apiserver-775b57f4d6- calico-apiserver f899bca3-0451-45a6-b02f-8752a2ae4ada 816 0 2024-11-12 21:35:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:775b57f4d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-0-6-01c097edc7 calico-apiserver-775b57f4d6-ntjrt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2184f0a01fb [] []}} ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.396 [INFO][4876] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.446 [INFO][4900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" 
HandleID="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.459 [INFO][4900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" HandleID="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003194d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-0-6-01c097edc7", "pod":"calico-apiserver-775b57f4d6-ntjrt", "timestamp":"2024-11-12 21:35:53.446907777 +0000 UTC"}, Hostname:"ci-4081-2-0-6-01c097edc7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.459 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.459 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.459 [INFO][4900] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-6-01c097edc7' Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.462 [INFO][4900] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.467 [INFO][4900] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.471 [INFO][4900] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.474 [INFO][4900] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.477 [INFO][4900] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.477 [INFO][4900] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.479 [INFO][4900] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1 Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.485 [INFO][4900] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.491 [INFO][4900] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.75.69/26] block=192.168.75.64/26 handle="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.491 [INFO][4900] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.69/26] handle="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.491 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:35:53.538176 containerd[1513]: 2024-11-12 21:35:53.491 [INFO][4900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.69/26] IPv6=[] ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" HandleID="k8s-pod-network.4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.540012 containerd[1513]: 2024-11-12 21:35:53.495 [INFO][4876] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f899bca3-0451-45a6-b02f-8752a2ae4ada", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"", Pod:"calico-apiserver-775b57f4d6-ntjrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2184f0a01fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:53.540012 containerd[1513]: 2024-11-12 21:35:53.495 [INFO][4876] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.69/32] ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.540012 containerd[1513]: 2024-11-12 21:35:53.495 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2184f0a01fb ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.540012 containerd[1513]: 2024-11-12 21:35:53.508 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" 
WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.540012 containerd[1513]: 2024-11-12 21:35:53.510 [INFO][4876] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f899bca3-0451-45a6-b02f-8752a2ae4ada", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1", Pod:"calico-apiserver-775b57f4d6-ntjrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2184f0a01fb", MAC:"06:68:66:18:d6:07", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:53.540012 containerd[1513]: 2024-11-12 21:35:53.527 [INFO][4876] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1" Namespace="calico-apiserver" Pod="calico-apiserver-775b57f4d6-ntjrt" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:35:53.564770 systemd-networkd[1406]: cali192450cfb36: Link UP Nov 12 21:35:53.567854 systemd-networkd[1406]: cali192450cfb36: Gained carrier Nov 12 21:35:53.596303 containerd[1513]: time="2024-11-12T21:35:53.596110012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:53.596303 containerd[1513]: time="2024-11-12T21:35:53.596215413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:53.596878 containerd[1513]: time="2024-11-12T21:35:53.596280818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:53.596878 containerd[1513]: time="2024-11-12T21:35:53.596625042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.418 [INFO][4887] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0 coredns-7db6d8ff4d- kube-system fb426bfe-a7f9-4da9-8328-a78ab7f2ab08 817 0 2024-11-12 21:35:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-0-6-01c097edc7 coredns-7db6d8ff4d-bh8r2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali192450cfb36 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.419 [INFO][4887] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.467 [INFO][4904] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" HandleID="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.477 [INFO][4904] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" 
HandleID="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-0-6-01c097edc7", "pod":"coredns-7db6d8ff4d-bh8r2", "timestamp":"2024-11-12 21:35:53.467479204 +0000 UTC"}, Hostname:"ci-4081-2-0-6-01c097edc7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.478 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.491 [INFO][4904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.491 [INFO][4904] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-6-01c097edc7' Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.496 [INFO][4904] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.507 [INFO][4904] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.519 [INFO][4904] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.522 [INFO][4904] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.525 [INFO][4904] ipam/ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.75.64/26 host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.525 [INFO][4904] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.531 [INFO][4904] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61 Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.540 [INFO][4904] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.553 [INFO][4904] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.70/26] block=192.168.75.64/26 handle="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.553 [INFO][4904] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.70/26] handle="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" host="ci-4081-2-0-6-01c097edc7" Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.553 [INFO][4904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 21:35:53.613306 containerd[1513]: 2024-11-12 21:35:53.553 [INFO][4904] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.70/26] IPv6=[] ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" HandleID="k8s-pod-network.b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.614153 containerd[1513]: 2024-11-12 21:35:53.559 [INFO][4887] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"", Pod:"coredns-7db6d8ff4d-bh8r2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali192450cfb36", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:53.614153 containerd[1513]: 2024-11-12 21:35:53.559 [INFO][4887] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.70/32] ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.614153 containerd[1513]: 2024-11-12 21:35:53.560 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali192450cfb36 ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.614153 containerd[1513]: 2024-11-12 21:35:53.571 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.614153 containerd[1513]: 2024-11-12 21:35:53.572 [INFO][4887] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" 
WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61", Pod:"coredns-7db6d8ff4d-bh8r2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali192450cfb36", MAC:"ee:2d:0f:3f:e3:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:35:53.614153 containerd[1513]: 2024-11-12 21:35:53.588 
[INFO][4887] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bh8r2" WorkloadEndpoint="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:35:53.630273 systemd[1]: Started cri-containerd-4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1.scope - libcontainer container 4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1. Nov 12 21:35:53.664020 containerd[1513]: time="2024-11-12T21:35:53.662952551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 21:35:53.667015 containerd[1513]: time="2024-11-12T21:35:53.665060182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 21:35:53.667015 containerd[1513]: time="2024-11-12T21:35:53.666533795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:53.667015 containerd[1513]: time="2024-11-12T21:35:53.666659584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 21:35:53.713282 systemd[1]: Started cri-containerd-b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61.scope - libcontainer container b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61. 
Nov 12 21:35:53.774500 containerd[1513]: time="2024-11-12T21:35:53.774204668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-775b57f4d6-ntjrt,Uid:f899bca3-0451-45a6-b02f-8752a2ae4ada,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1\"" Nov 12 21:35:53.788824 containerd[1513]: time="2024-11-12T21:35:53.788747181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bh8r2,Uid:fb426bfe-a7f9-4da9-8328-a78ab7f2ab08,Namespace:kube-system,Attempt:1,} returns sandbox id \"b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61\"" Nov 12 21:35:53.793192 containerd[1513]: time="2024-11-12T21:35:53.793135071Z" level=info msg="CreateContainer within sandbox \"b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 21:35:53.807946 containerd[1513]: time="2024-11-12T21:35:53.807879099Z" level=info msg="CreateContainer within sandbox \"b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5569d7052734397f9a7ee8375207b95776faf1f927d90a9f428bccfee4eb65f8\"" Nov 12 21:35:53.809799 containerd[1513]: time="2024-11-12T21:35:53.808743824Z" level=info msg="StartContainer for \"5569d7052734397f9a7ee8375207b95776faf1f927d90a9f428bccfee4eb65f8\"" Nov 12 21:35:53.842377 systemd[1]: Started cri-containerd-5569d7052734397f9a7ee8375207b95776faf1f927d90a9f428bccfee4eb65f8.scope - libcontainer container 5569d7052734397f9a7ee8375207b95776faf1f927d90a9f428bccfee4eb65f8. 
Nov 12 21:35:53.886027 containerd[1513]: time="2024-11-12T21:35:53.885462377Z" level=info msg="StartContainer for \"5569d7052734397f9a7ee8375207b95776faf1f927d90a9f428bccfee4eb65f8\" returns successfully" Nov 12 21:35:54.022181 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL Nov 12 21:35:54.048125 sshd[4784]: Invalid user postgres from 35.240.185.59 port 39756 Nov 12 21:35:54.269381 sshd[4784]: Connection closed by invalid user postgres 35.240.185.59 port 39756 [preauth] Nov 12 21:35:54.275242 systemd[1]: sshd@22-188.245.86.234:22-35.240.185.59:39756.service: Deactivated successfully. Nov 12 21:35:54.278162 systemd-networkd[1406]: cali30834f0ca55: Gained IPv6LL Nov 12 21:35:54.724414 systemd-networkd[1406]: cali8b0007e223a: Gained IPv6LL Nov 12 21:35:54.836255 containerd[1513]: time="2024-11-12T21:35:54.836181374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:54.837857 containerd[1513]: time="2024-11-12T21:35:54.837753225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 21:35:54.840778 containerd[1513]: time="2024-11-12T21:35:54.840674314Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:54.843833 containerd[1513]: time="2024-11-12T21:35:54.843780465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:54.845127 containerd[1513]: time="2024-11-12T21:35:54.844988284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.983134268s" Nov 12 21:35:54.845127 containerd[1513]: time="2024-11-12T21:35:54.845049170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 21:35:54.861060 containerd[1513]: time="2024-11-12T21:35:54.860951495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 21:35:54.885675 containerd[1513]: time="2024-11-12T21:35:54.885180957Z" level=info msg="CreateContainer within sandbox \"336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 21:35:54.904615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440519884.mount: Deactivated successfully. Nov 12 21:35:54.908388 containerd[1513]: time="2024-11-12T21:35:54.908303032Z" level=info msg="CreateContainer within sandbox \"336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21\"" Nov 12 21:35:54.911201 containerd[1513]: time="2024-11-12T21:35:54.910527284Z" level=info msg="StartContainer for \"69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21\"" Nov 12 21:35:54.948312 systemd[1]: Started cri-containerd-69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21.scope - libcontainer container 69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21. 
Nov 12 21:35:54.997757 containerd[1513]: time="2024-11-12T21:35:54.997605367Z" level=info msg="StartContainer for \"69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21\" returns successfully" Nov 12 21:35:55.044851 systemd-networkd[1406]: cali192450cfb36: Gained IPv6LL Nov 12 21:35:55.364288 systemd-networkd[1406]: cali2184f0a01fb: Gained IPv6LL Nov 12 21:35:55.461148 kubelet[2885]: I1112 21:35:55.461042 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bh8r2" podStartSLOduration=34.461018088 podStartE2EDuration="34.461018088s" podCreationTimestamp="2024-11-12 21:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 21:35:54.444841906 +0000 UTC m=+48.469067416" watchObservedRunningTime="2024-11-12 21:35:55.461018088 +0000 UTC m=+49.485243618" Nov 12 21:35:55.498600 kubelet[2885]: I1112 21:35:55.497815 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78cbf79fc8-jfgm6" podStartSLOduration=24.494352833 podStartE2EDuration="27.497790859s" podCreationTimestamp="2024-11-12 21:35:28 +0000 UTC" firstStartedPulling="2024-11-12 21:35:51.855422118 +0000 UTC m=+45.879647628" lastFinishedPulling="2024-11-12 21:35:54.858860144 +0000 UTC m=+48.883085654" observedRunningTime="2024-11-12 21:35:55.461877924 +0000 UTC m=+49.486103455" watchObservedRunningTime="2024-11-12 21:35:55.497790859 +0000 UTC m=+49.522016369" Nov 12 21:35:56.899531 containerd[1513]: time="2024-11-12T21:35:56.899480556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:56.900642 containerd[1513]: time="2024-11-12T21:35:56.900603293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 21:35:56.902885 
containerd[1513]: time="2024-11-12T21:35:56.902833618Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:56.905615 containerd[1513]: time="2024-11-12T21:35:56.905530412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:56.906622 containerd[1513]: time="2024-11-12T21:35:56.906156013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.044856034s" Nov 12 21:35:56.906622 containerd[1513]: time="2024-11-12T21:35:56.906188465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 21:35:56.907918 containerd[1513]: time="2024-11-12T21:35:56.907584903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 21:35:56.909264 containerd[1513]: time="2024-11-12T21:35:56.909232549Z" level=info msg="CreateContainer within sandbox \"b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 21:35:56.931201 containerd[1513]: time="2024-11-12T21:35:56.931135803Z" level=info msg="CreateContainer within sandbox \"b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0fcefd7d76b750fe6909693b202211a1f62853d500d6de49a6664b656549210c\"" Nov 12 21:35:56.932562 
containerd[1513]: time="2024-11-12T21:35:56.932508164Z" level=info msg="StartContainer for \"0fcefd7d76b750fe6909693b202211a1f62853d500d6de49a6664b656549210c\"" Nov 12 21:35:56.986367 systemd[1]: Started cri-containerd-0fcefd7d76b750fe6909693b202211a1f62853d500d6de49a6664b656549210c.scope - libcontainer container 0fcefd7d76b750fe6909693b202211a1f62853d500d6de49a6664b656549210c. Nov 12 21:35:57.029696 containerd[1513]: time="2024-11-12T21:35:57.029380369Z" level=info msg="StartContainer for \"0fcefd7d76b750fe6909693b202211a1f62853d500d6de49a6664b656549210c\" returns successfully" Nov 12 21:35:58.240402 containerd[1513]: time="2024-11-12T21:35:58.240313516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:58.241590 containerd[1513]: time="2024-11-12T21:35:58.241494114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 21:35:58.243391 containerd[1513]: time="2024-11-12T21:35:58.242288105Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:58.244675 containerd[1513]: time="2024-11-12T21:35:58.244648078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:58.245402 containerd[1513]: time="2024-11-12T21:35:58.245370775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.337738371s" Nov 12 21:35:58.246109 
containerd[1513]: time="2024-11-12T21:35:58.245406253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 21:35:58.247991 containerd[1513]: time="2024-11-12T21:35:58.247942510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 21:35:58.249753 containerd[1513]: time="2024-11-12T21:35:58.249710918Z" level=info msg="CreateContainer within sandbox \"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 21:35:58.275164 containerd[1513]: time="2024-11-12T21:35:58.275122730Z" level=info msg="CreateContainer within sandbox \"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a3978342a158a317cbcb7d34112185829df4a06fcb87530de970768098754d76\"" Nov 12 21:35:58.276753 containerd[1513]: time="2024-11-12T21:35:58.275837620Z" level=info msg="StartContainer for \"a3978342a158a317cbcb7d34112185829df4a06fcb87530de970768098754d76\"" Nov 12 21:35:58.310257 systemd[1]: Started cri-containerd-a3978342a158a317cbcb7d34112185829df4a06fcb87530de970768098754d76.scope - libcontainer container a3978342a158a317cbcb7d34112185829df4a06fcb87530de970768098754d76. 
Nov 12 21:35:58.350599 containerd[1513]: time="2024-11-12T21:35:58.350557507Z" level=info msg="StartContainer for \"a3978342a158a317cbcb7d34112185829df4a06fcb87530de970768098754d76\" returns successfully" Nov 12 21:35:58.458573 kubelet[2885]: I1112 21:35:58.458533 2885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 21:35:58.625676 containerd[1513]: time="2024-11-12T21:35:58.625142763Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:35:58.626405 containerd[1513]: time="2024-11-12T21:35:58.626351836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 21:35:58.628914 containerd[1513]: time="2024-11-12T21:35:58.628859740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 380.883855ms" Nov 12 21:35:58.628914 containerd[1513]: time="2024-11-12T21:35:58.628896310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 21:35:58.630711 containerd[1513]: time="2024-11-12T21:35:58.630628468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 21:35:58.632599 containerd[1513]: time="2024-11-12T21:35:58.632495533Z" level=info msg="CreateContainer within sandbox \"4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 21:35:58.659222 containerd[1513]: time="2024-11-12T21:35:58.659146804Z" level=info 
msg="CreateContainer within sandbox \"4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4874dfeb42c3037fbba440e690153ebb74b71f754592a5d0ef53648e9863e5bb\"" Nov 12 21:35:58.660308 containerd[1513]: time="2024-11-12T21:35:58.660256837Z" level=info msg="StartContainer for \"4874dfeb42c3037fbba440e690153ebb74b71f754592a5d0ef53648e9863e5bb\"" Nov 12 21:35:58.703623 systemd[1]: Started cri-containerd-4874dfeb42c3037fbba440e690153ebb74b71f754592a5d0ef53648e9863e5bb.scope - libcontainer container 4874dfeb42c3037fbba440e690153ebb74b71f754592a5d0ef53648e9863e5bb. Nov 12 21:35:58.768436 containerd[1513]: time="2024-11-12T21:35:58.768362275Z" level=info msg="StartContainer for \"4874dfeb42c3037fbba440e690153ebb74b71f754592a5d0ef53648e9863e5bb\" returns successfully" Nov 12 21:35:59.476570 kubelet[2885]: I1112 21:35:59.476161 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-775b57f4d6-282rn" podStartSLOduration=28.533950383 podStartE2EDuration="32.476142221s" podCreationTimestamp="2024-11-12 21:35:27 +0000 UTC" firstStartedPulling="2024-11-12 21:35:52.964842096 +0000 UTC m=+46.989067607" lastFinishedPulling="2024-11-12 21:35:56.907033934 +0000 UTC m=+50.931259445" observedRunningTime="2024-11-12 21:35:57.466729401 +0000 UTC m=+51.490954901" watchObservedRunningTime="2024-11-12 21:35:59.476142221 +0000 UTC m=+53.500367730" Nov 12 21:35:59.476570 kubelet[2885]: I1112 21:35:59.476248 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-775b57f4d6-ntjrt" podStartSLOduration=27.621575254 podStartE2EDuration="32.476244976s" podCreationTimestamp="2024-11-12 21:35:27 +0000 UTC" firstStartedPulling="2024-11-12 21:35:53.775475175 +0000 UTC m=+47.799700675" lastFinishedPulling="2024-11-12 21:35:58.630144887 +0000 UTC m=+52.654370397" observedRunningTime="2024-11-12 
21:35:59.475872026 +0000 UTC m=+53.500097535" watchObservedRunningTime="2024-11-12 21:35:59.476244976 +0000 UTC m=+53.500470486" Nov 12 21:36:00.088399 containerd[1513]: time="2024-11-12T21:36:00.088342400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:36:00.090994 containerd[1513]: time="2024-11-12T21:36:00.090937953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 21:36:00.091709 containerd[1513]: time="2024-11-12T21:36:00.091481197Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:36:00.094907 containerd[1513]: time="2024-11-12T21:36:00.094836817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 21:36:00.096465 containerd[1513]: time="2024-11-12T21:36:00.096057932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.465370421s" Nov 12 21:36:00.096465 containerd[1513]: time="2024-11-12T21:36:00.096116222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 21:36:00.099466 containerd[1513]: time="2024-11-12T21:36:00.099441074Z" level=info msg="CreateContainer within 
sandbox \"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 21:36:00.118168 containerd[1513]: time="2024-11-12T21:36:00.118126004Z" level=info msg="CreateContainer within sandbox \"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a7742d412ec6246e5ea9d8fe50c3de3fa327babf10f8bc5d310d9128ecb4778c\"" Nov 12 21:36:00.119962 containerd[1513]: time="2024-11-12T21:36:00.119939868Z" level=info msg="StartContainer for \"a7742d412ec6246e5ea9d8fe50c3de3fa327babf10f8bc5d310d9128ecb4778c\"" Nov 12 21:36:00.162267 systemd[1]: Started cri-containerd-a7742d412ec6246e5ea9d8fe50c3de3fa327babf10f8bc5d310d9128ecb4778c.scope - libcontainer container a7742d412ec6246e5ea9d8fe50c3de3fa327babf10f8bc5d310d9128ecb4778c. Nov 12 21:36:00.217741 containerd[1513]: time="2024-11-12T21:36:00.217680466Z" level=info msg="StartContainer for \"a7742d412ec6246e5ea9d8fe50c3de3fa327babf10f8bc5d310d9128ecb4778c\" returns successfully" Nov 12 21:36:00.270424 systemd[1]: run-containerd-runc-k8s.io-a7742d412ec6246e5ea9d8fe50c3de3fa327babf10f8bc5d310d9128ecb4778c-runc.WQf9rb.mount: Deactivated successfully. 
Nov 12 21:36:00.474124 kubelet[2885]: I1112 21:36:00.473268 2885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 21:36:00.497610 kubelet[2885]: I1112 21:36:00.497530 2885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-chjk5" podStartSLOduration=25.394354192 podStartE2EDuration="32.497514587s" podCreationTimestamp="2024-11-12 21:35:28 +0000 UTC" firstStartedPulling="2024-11-12 21:35:52.994405648 +0000 UTC m=+47.018631159" lastFinishedPulling="2024-11-12 21:36:00.097566043 +0000 UTC m=+54.121791554" observedRunningTime="2024-11-12 21:36:00.496341854 +0000 UTC m=+54.520567354" watchObservedRunningTime="2024-11-12 21:36:00.497514587 +0000 UTC m=+54.521740087" Nov 12 21:36:01.386447 kubelet[2885]: I1112 21:36:01.386361 2885 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 21:36:01.392527 kubelet[2885]: I1112 21:36:01.392495 2885 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 21:36:03.387433 kubelet[2885]: I1112 21:36:03.387370 2885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 21:36:06.134866 containerd[1513]: time="2024-11-12T21:36:06.133896568Z" level=info msg="StopPodSandbox for \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\"" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.248 [WARNING][5330] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61", Pod:"coredns-7db6d8ff4d-bh8r2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali192450cfb36", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.249 [INFO][5330] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.249 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" iface="eth0" netns="" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.249 [INFO][5330] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.249 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.289 [INFO][5336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.290 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.290 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.296 [WARNING][5336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.296 [INFO][5336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.298 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.306357 containerd[1513]: 2024-11-12 21:36:06.303 [INFO][5330] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.309513 containerd[1513]: time="2024-11-12T21:36:06.306423605Z" level=info msg="TearDown network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\" successfully" Nov 12 21:36:06.309513 containerd[1513]: time="2024-11-12T21:36:06.306450156Z" level=info msg="StopPodSandbox for \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\" returns successfully" Nov 12 21:36:06.309513 containerd[1513]: time="2024-11-12T21:36:06.307372522Z" level=info msg="RemovePodSandbox for \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\"" Nov 12 21:36:06.309513 containerd[1513]: time="2024-11-12T21:36:06.307409423Z" level=info msg="Forcibly stopping sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\"" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.363 [WARNING][5358] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb426bfe-a7f9-4da9-8328-a78ab7f2ab08", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"b357be0a03def338af0a34f3f14035e586c03ee86ad86929b7271338e01b2d61", Pod:"coredns-7db6d8ff4d-bh8r2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali192450cfb36", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.364 [INFO][5358] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.364 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" iface="eth0" netns="" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.365 [INFO][5358] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.365 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.389 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.389 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.389 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.395 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.395 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" HandleID="k8s-pod-network.e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--bh8r2-eth0" Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.397 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.405545 containerd[1513]: 2024-11-12 21:36:06.401 [INFO][5358] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba" Nov 12 21:36:06.405545 containerd[1513]: time="2024-11-12T21:36:06.405129427Z" level=info msg="TearDown network for sandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\" successfully" Nov 12 21:36:06.419573 containerd[1513]: time="2024-11-12T21:36:06.419460660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 21:36:06.428628 containerd[1513]: time="2024-11-12T21:36:06.428561467Z" level=info msg="RemovePodSandbox \"e193a8602cd3ccb9d63786c8fe3d8bd4e62e5db12832377de108e7fdaac316ba\" returns successfully" Nov 12 21:36:06.429297 containerd[1513]: time="2024-11-12T21:36:06.429241513Z" level=info msg="StopPodSandbox for \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\"" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.469 [WARNING][5385] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0", GenerateName:"calico-kube-controllers-78cbf79fc8-", Namespace:"calico-system", SelfLink:"", UID:"cb73e657-cad9-47bd-b125-00eeaf912c0a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78cbf79fc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb", Pod:"calico-kube-controllers-78cbf79fc8-jfgm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2d98f555e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.470 [INFO][5385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.470 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" iface="eth0" netns="" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.470 [INFO][5385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.470 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.496 [INFO][5391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.496 [INFO][5391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.496 [INFO][5391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.503 [WARNING][5391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.503 [INFO][5391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.504 [INFO][5391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.510868 containerd[1513]: 2024-11-12 21:36:06.507 [INFO][5385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.512269 containerd[1513]: time="2024-11-12T21:36:06.510913564Z" level=info msg="TearDown network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\" successfully" Nov 12 21:36:06.512269 containerd[1513]: time="2024-11-12T21:36:06.510946918Z" level=info msg="StopPodSandbox for \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\" returns successfully" Nov 12 21:36:06.512269 containerd[1513]: time="2024-11-12T21:36:06.511403187Z" level=info msg="RemovePodSandbox for \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\"" Nov 12 21:36:06.512269 containerd[1513]: time="2024-11-12T21:36:06.511427443Z" level=info msg="Forcibly stopping sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\"" Nov 12 21:36:06.544662 systemd[1]: Started sshd@23-188.245.86.234:22-35.240.185.59:36278.service - OpenSSH per-connection server daemon (35.240.185.59:36278). 
Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.584 [WARNING][5409] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0", GenerateName:"calico-kube-controllers-78cbf79fc8-", Namespace:"calico-system", SelfLink:"", UID:"cb73e657-cad9-47bd-b125-00eeaf912c0a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78cbf79fc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"336c9a1c0d8705660b300e01feb131ef45bf792905265d7fb5abc04959c1f4fb", Pod:"calico-kube-controllers-78cbf79fc8-jfgm6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2d98f555e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.585 [INFO][5409] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.585 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" iface="eth0" netns="" Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.585 [INFO][5409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.585 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.618 [INFO][5417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.618 [INFO][5417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.618 [INFO][5417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.625 [WARNING][5417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.625 [INFO][5417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" HandleID="k8s-pod-network.9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--kube--controllers--78cbf79fc8--jfgm6-eth0" Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.627 [INFO][5417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.633793 containerd[1513]: 2024-11-12 21:36:06.630 [INFO][5409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0" Nov 12 21:36:06.636314 containerd[1513]: time="2024-11-12T21:36:06.634290337Z" level=info msg="TearDown network for sandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\" successfully" Nov 12 21:36:06.639607 containerd[1513]: time="2024-11-12T21:36:06.639547452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 21:36:06.639607 containerd[1513]: time="2024-11-12T21:36:06.639622415Z" level=info msg="RemovePodSandbox \"9d2f8436b5618039a0964be04b8d3610f4cccf22590f613967f75a2a13398bc0\" returns successfully" Nov 12 21:36:06.640398 containerd[1513]: time="2024-11-12T21:36:06.640361403Z" level=info msg="StopPodSandbox for \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\"" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.682 [WARNING][5437] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23bc69df-3188-4289-a376-a1e2658f5a79", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6", Pod:"csi-node-driver-chjk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8b0007e223a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.682 [INFO][5437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.682 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" iface="eth0" netns="" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.682 [INFO][5437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.682 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.708 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.708 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.708 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.714 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.714 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.717 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.724795 containerd[1513]: 2024-11-12 21:36:06.720 [INFO][5437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.724795 containerd[1513]: time="2024-11-12T21:36:06.724650710Z" level=info msg="TearDown network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\" successfully" Nov 12 21:36:06.724795 containerd[1513]: time="2024-11-12T21:36:06.724681519Z" level=info msg="StopPodSandbox for \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\" returns successfully" Nov 12 21:36:06.726866 containerd[1513]: time="2024-11-12T21:36:06.725584660Z" level=info msg="RemovePodSandbox for \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\"" Nov 12 21:36:06.726866 containerd[1513]: time="2024-11-12T21:36:06.725610138Z" level=info msg="Forcibly stopping sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\"" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.759 [WARNING][5461] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23bc69df-3188-4289-a376-a1e2658f5a79", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"340351aa1b5edd9597e5690c5187aa7b43842363056ac7c56fc26d4317b753b6", Pod:"csi-node-driver-chjk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8b0007e223a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.759 [INFO][5461] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.759 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" iface="eth0" netns="" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.759 [INFO][5461] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.759 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.784 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.784 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.784 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.789 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.789 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" HandleID="k8s-pod-network.de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Workload="ci--4081--2--0--6--01c097edc7-k8s-csi--node--driver--chjk5-eth0" Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.790 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.796622 containerd[1513]: 2024-11-12 21:36:06.793 [INFO][5461] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68" Nov 12 21:36:06.797484 containerd[1513]: time="2024-11-12T21:36:06.796664562Z" level=info msg="TearDown network for sandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\" successfully" Nov 12 21:36:06.801557 containerd[1513]: time="2024-11-12T21:36:06.801520785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 21:36:06.801557 containerd[1513]: time="2024-11-12T21:36:06.801586218Z" level=info msg="RemovePodSandbox \"de9e54db176e81dc03a8dbaa7c3a1062418e30c48bb64f5ec3ef4c548f2b7a68\" returns successfully" Nov 12 21:36:06.807489 containerd[1513]: time="2024-11-12T21:36:06.807458566Z" level=info msg="StopPodSandbox for \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\"" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.849 [WARNING][5485] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"65dd8fef-58ef-4aad-a3ea-c7f983131d2d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e", Pod:"calico-apiserver-775b57f4d6-282rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30834f0ca55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.850 [INFO][5485] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.850 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" iface="eth0" netns="" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.850 [INFO][5485] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.850 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.875 [INFO][5491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.876 [INFO][5491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.876 [INFO][5491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.881 [WARNING][5491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.881 [INFO][5491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.882 [INFO][5491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.889472 containerd[1513]: 2024-11-12 21:36:06.885 [INFO][5485] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.889472 containerd[1513]: time="2024-11-12T21:36:06.889342872Z" level=info msg="TearDown network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\" successfully" Nov 12 21:36:06.889472 containerd[1513]: time="2024-11-12T21:36:06.889368421Z" level=info msg="StopPodSandbox for \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\" returns successfully" Nov 12 21:36:06.890519 containerd[1513]: time="2024-11-12T21:36:06.889931594Z" level=info msg="RemovePodSandbox for \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\"" Nov 12 21:36:06.890519 containerd[1513]: time="2024-11-12T21:36:06.889972101Z" level=info msg="Forcibly stopping sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\"" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.936 [WARNING][5509] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"65dd8fef-58ef-4aad-a3ea-c7f983131d2d", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"b5757267c592d305676df5d1aaef4ecd533cc88852d657fbb2587db242c8007e", Pod:"calico-apiserver-775b57f4d6-282rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30834f0ca55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.937 [INFO][5509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.937 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" iface="eth0" netns="" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.937 [INFO][5509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.937 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.963 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.963 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.963 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.968 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.968 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" HandleID="k8s-pod-network.6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--282rn-eth0" Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.969 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:06.976245 containerd[1513]: 2024-11-12 21:36:06.973 [INFO][5509] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103" Nov 12 21:36:06.976245 containerd[1513]: time="2024-11-12T21:36:06.976171137Z" level=info msg="TearDown network for sandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\" successfully" Nov 12 21:36:06.986873 containerd[1513]: time="2024-11-12T21:36:06.986805656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 21:36:06.987020 containerd[1513]: time="2024-11-12T21:36:06.986892902Z" level=info msg="RemovePodSandbox \"6eeddde6f89415f32a3a1d8714ee27076ab3ee3803274679d14cba5e6ea98103\" returns successfully" Nov 12 21:36:06.996303 containerd[1513]: time="2024-11-12T21:36:06.996254427Z" level=info msg="StopPodSandbox for \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\"" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.046 [WARNING][5533] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5", Pod:"coredns-7db6d8ff4d-w2jtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie44bdbaf52f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.046 [INFO][5533] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.046 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" iface="eth0" netns="" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.046 [INFO][5533] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.046 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.077 [INFO][5539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.077 [INFO][5539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.077 [INFO][5539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.084 [WARNING][5539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.085 [INFO][5539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.087 [INFO][5539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:07.096961 containerd[1513]: 2024-11-12 21:36:07.091 [INFO][5533] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.098223 containerd[1513]: time="2024-11-12T21:36:07.097006260Z" level=info msg="TearDown network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\" successfully" Nov 12 21:36:07.098223 containerd[1513]: time="2024-11-12T21:36:07.097109586Z" level=info msg="StopPodSandbox for \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\" returns successfully" Nov 12 21:36:07.098593 containerd[1513]: time="2024-11-12T21:36:07.098110373Z" level=info msg="RemovePodSandbox for \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\"" Nov 12 21:36:07.098593 containerd[1513]: time="2024-11-12T21:36:07.098391809Z" level=info msg="Forcibly stopping sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\"" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.159 [WARNING][5558] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3ebecd64-9e86-4ac7-9cb2-b37aacbc6d0d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"f1ff990acc2c82975b6e17335e144acf4e258a368d2dbdd85270ba1e291079f5", Pod:"coredns-7db6d8ff4d-w2jtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie44bdbaf52f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.159 [INFO][5558] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.159 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" iface="eth0" netns="" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.159 [INFO][5558] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.159 [INFO][5558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.188 [INFO][5564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.188 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.188 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.195 [WARNING][5564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.195 [INFO][5564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" HandleID="k8s-pod-network.62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Workload="ci--4081--2--0--6--01c097edc7-k8s-coredns--7db6d8ff4d--w2jtl-eth0" Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.196 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:07.204870 containerd[1513]: 2024-11-12 21:36:07.201 [INFO][5558] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1" Nov 12 21:36:07.204870 containerd[1513]: time="2024-11-12T21:36:07.204818875Z" level=info msg="TearDown network for sandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\" successfully" Nov 12 21:36:07.209880 containerd[1513]: time="2024-11-12T21:36:07.209713420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 21:36:07.209880 containerd[1513]: time="2024-11-12T21:36:07.209784306Z" level=info msg="RemovePodSandbox \"62dad4452bfc23a677bb01d27f28c25ba0c58f194a2869eadb5c393fcafb21d1\" returns successfully" Nov 12 21:36:07.210952 containerd[1513]: time="2024-11-12T21:36:07.210912916Z" level=info msg="StopPodSandbox for \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\"" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.263 [WARNING][5582] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f899bca3-0451-45a6-b02f-8752a2ae4ada", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1", Pod:"calico-apiserver-775b57f4d6-ntjrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2184f0a01fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.264 [INFO][5582] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.264 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" iface="eth0" netns="" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.264 [INFO][5582] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.264 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.303 [INFO][5588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.303 [INFO][5588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.303 [INFO][5588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.312 [WARNING][5588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.312 [INFO][5588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.315 [INFO][5588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:07.330267 containerd[1513]: 2024-11-12 21:36:07.320 [INFO][5582] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.330267 containerd[1513]: time="2024-11-12T21:36:07.329310056Z" level=info msg="TearDown network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\" successfully" Nov 12 21:36:07.330267 containerd[1513]: time="2024-11-12T21:36:07.329358760Z" level=info msg="StopPodSandbox for \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\" returns successfully" Nov 12 21:36:07.336422 containerd[1513]: time="2024-11-12T21:36:07.336044718Z" level=info msg="RemovePodSandbox for \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\"" Nov 12 21:36:07.336422 containerd[1513]: time="2024-11-12T21:36:07.336097800Z" level=info msg="Forcibly stopping sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\"" Nov 12 21:36:07.341576 systemd[1]: run-containerd-runc-k8s.io-69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21-runc.bhN2R6.mount: Deactivated successfully. 
Nov 12 21:36:07.448490 sshd[5415]: Invalid user ts from 35.240.185.59 port 36278 Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.420 [WARNING][5622] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0", GenerateName:"calico-apiserver-775b57f4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f899bca3-0451-45a6-b02f-8752a2ae4ada", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 21, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"775b57f4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-6-01c097edc7", ContainerID:"4435566239d0bb65db9045df284c9e943524e343b83fd2d67f8d6893bf0427e1", Pod:"calico-apiserver-775b57f4d6-ntjrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2184f0a01fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.420 [INFO][5622] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.420 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" iface="eth0" netns="" Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.420 [INFO][5622] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.420 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.460 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.460 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.460 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.467 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.467 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" HandleID="k8s-pod-network.4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Workload="ci--4081--2--0--6--01c097edc7-k8s-calico--apiserver--775b57f4d6--ntjrt-eth0" Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.469 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 21:36:07.475318 containerd[1513]: 2024-11-12 21:36:07.472 [INFO][5622] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7" Nov 12 21:36:07.475856 containerd[1513]: time="2024-11-12T21:36:07.475806013Z" level=info msg="TearDown network for sandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\" successfully" Nov 12 21:36:07.480162 containerd[1513]: time="2024-11-12T21:36:07.480124750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 21:36:07.480162 containerd[1513]: time="2024-11-12T21:36:07.480176770Z" level=info msg="RemovePodSandbox \"4df54e520198897be3d1ce315a70fa186f4bdd9c73c33c0e6ec7fc90594a1fc7\" returns successfully" Nov 12 21:36:07.648642 sshd[5415]: Connection closed by invalid user ts 35.240.185.59 port 36278 [preauth] Nov 12 21:36:07.653036 systemd[1]: sshd@23-188.245.86.234:22-35.240.185.59:36278.service: Deactivated successfully. 
Nov 12 21:36:12.739098 systemd[1]: run-containerd-runc-k8s.io-8bd69c6611a99fbc792bbfdc7cb96c218d63525f84b07707edf5bd12ac747cc3-runc.QD0xsr.mount: Deactivated successfully. Nov 12 21:36:19.822348 systemd[1]: Started sshd@24-188.245.86.234:22-35.240.185.59:50816.service - OpenSSH per-connection server daemon (35.240.185.59:50816). Nov 12 21:36:20.877365 sshd[5674]: Connection closed by authenticating user root 35.240.185.59 port 50816 [preauth] Nov 12 21:36:20.885583 systemd[1]: sshd@24-188.245.86.234:22-35.240.185.59:50816.service: Deactivated successfully. Nov 12 21:36:33.001869 systemd[1]: Started sshd@25-188.245.86.234:22-35.240.185.59:56288.service - OpenSSH per-connection server daemon (35.240.185.59:56288). Nov 12 21:36:33.702128 sshd[5682]: Invalid user ftpuser from 35.240.185.59 port 56288 Nov 12 21:36:33.864958 sshd[5682]: Connection closed by invalid user ftpuser 35.240.185.59 port 56288 [preauth] Nov 12 21:36:33.869993 systemd[1]: sshd@25-188.245.86.234:22-35.240.185.59:56288.service: Deactivated successfully. Nov 12 21:36:35.713724 kubelet[2885]: I1112 21:36:35.713413 2885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 21:36:46.155503 systemd[1]: Started sshd@26-188.245.86.234:22-35.240.185.59:45118.service - OpenSSH per-connection server daemon (35.240.185.59:45118). Nov 12 21:36:46.874344 sshd[5738]: Invalid user test from 35.240.185.59 port 45118 Nov 12 21:36:47.034624 sshd[5738]: Connection closed by invalid user test 35.240.185.59 port 45118 [preauth] Nov 12 21:36:47.038156 systemd[1]: sshd@26-188.245.86.234:22-35.240.185.59:45118.service: Deactivated successfully. Nov 12 21:36:59.263334 systemd[1]: Started sshd@27-188.245.86.234:22-35.240.185.59:51040.service - OpenSSH per-connection server daemon (35.240.185.59:51040). 
Nov 12 21:36:59.982553 sshd[5764]: Invalid user gitlab from 35.240.185.59 port 51040 Nov 12 21:37:00.154967 sshd[5764]: Connection closed by invalid user gitlab 35.240.185.59 port 51040 [preauth] Nov 12 21:37:00.158012 systemd[1]: sshd@27-188.245.86.234:22-35.240.185.59:51040.service: Deactivated successfully. Nov 12 21:37:07.323314 systemd[1]: run-containerd-runc-k8s.io-69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21-runc.HsCwna.mount: Deactivated successfully. Nov 12 21:37:12.529034 systemd[1]: Started sshd@28-188.245.86.234:22-35.240.185.59:57068.service - OpenSSH per-connection server daemon (35.240.185.59:57068). Nov 12 21:37:13.214401 sshd[5790]: Invalid user guest from 35.240.185.59 port 57068 Nov 12 21:37:13.377121 sshd[5790]: Connection closed by invalid user guest 35.240.185.59 port 57068 [preauth] Nov 12 21:37:13.380657 systemd[1]: sshd@28-188.245.86.234:22-35.240.185.59:57068.service: Deactivated successfully. Nov 12 21:37:25.763452 systemd[1]: Started sshd@29-188.245.86.234:22-35.240.185.59:52944.service - OpenSSH per-connection server daemon (35.240.185.59:52944). Nov 12 21:37:26.716276 sshd[5840]: Invalid user worker from 35.240.185.59 port 52944 Nov 12 21:37:26.886482 sshd[5840]: Connection closed by invalid user worker 35.240.185.59 port 52944 [preauth] Nov 12 21:37:26.891893 systemd[1]: sshd@29-188.245.86.234:22-35.240.185.59:52944.service: Deactivated successfully. Nov 12 21:37:39.149299 systemd[1]: Started sshd@30-188.245.86.234:22-35.240.185.59:45822.service - OpenSSH per-connection server daemon (35.240.185.59:45822). Nov 12 21:37:39.914620 sshd[5869]: Invalid user flask from 35.240.185.59 port 45822 Nov 12 21:37:40.078387 sshd[5869]: Connection closed by invalid user flask 35.240.185.59 port 45822 [preauth] Nov 12 21:37:40.083223 systemd[1]: sshd@30-188.245.86.234:22-35.240.185.59:45822.service: Deactivated successfully. 
Nov 12 21:37:46.455906 update_engine[1495]: I20241112 21:37:46.455360 1495 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 12 21:37:46.455906 update_engine[1495]: I20241112 21:37:46.455505 1495 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 12 21:37:46.460300 update_engine[1495]: I20241112 21:37:46.460101 1495 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461120 1495 omaha_request_params.cc:62] Current group set to stable Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461284 1495 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461300 1495 update_attempter.cc:643] Scheduling an action processor start. Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461328 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461403 1495 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461484 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461494 1495 omaha_request_action.cc:272] Request: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: Nov 12 21:37:46.461698 update_engine[1495]: I20241112 21:37:46.461503 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 21:37:46.478082 locksmithd[1525]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 12 21:37:46.479592 update_engine[1495]: I20241112 21:37:46.478842 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 21:37:46.479592 update_engine[1495]: I20241112 21:37:46.479241 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 21:37:46.480264 update_engine[1495]: E20241112 21:37:46.480226 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 21:37:46.480323 update_engine[1495]: I20241112 21:37:46.480293 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 12 21:37:52.771780 systemd[1]: Started sshd@31-188.245.86.234:22-35.240.185.59:50410.service - OpenSSH per-connection server daemon (35.240.185.59:50410). Nov 12 21:37:53.452677 sshd[5919]: Invalid user gpuadmin from 35.240.185.59 port 50410 Nov 12 21:37:53.612465 sshd[5919]: Connection closed by invalid user gpuadmin 35.240.185.59 port 50410 [preauth] Nov 12 21:37:53.616391 systemd[1]: sshd@31-188.245.86.234:22-35.240.185.59:50410.service: Deactivated successfully. Nov 12 21:37:56.308499 update_engine[1495]: I20241112 21:37:56.308398 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 21:37:56.309749 update_engine[1495]: I20241112 21:37:56.308701 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 21:37:56.309749 update_engine[1495]: I20241112 21:37:56.309007 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 12 21:37:56.309815 update_engine[1495]: E20241112 21:37:56.309757 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 21:37:56.309815 update_engine[1495]: I20241112 21:37:56.309804 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 12 21:38:06.049174 systemd[1]: Started sshd@32-188.245.86.234:22-35.240.185.59:60454.service - OpenSSH per-connection server daemon (35.240.185.59:60454). Nov 12 21:38:06.305785 update_engine[1495]: I20241112 21:38:06.305565 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 21:38:06.306262 update_engine[1495]: I20241112 21:38:06.305968 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 21:38:06.306671 update_engine[1495]: I20241112 21:38:06.306381 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 21:38:06.307142 update_engine[1495]: E20241112 21:38:06.307097 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 21:38:06.307184 update_engine[1495]: I20241112 21:38:06.307170 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 12 21:38:06.794471 sshd[5927]: Invalid user zabbix from 35.240.185.59 port 60454 Nov 12 21:38:06.958033 sshd[5927]: Connection closed by invalid user zabbix 35.240.185.59 port 60454 [preauth] Nov 12 21:38:06.961011 systemd[1]: sshd@32-188.245.86.234:22-35.240.185.59:60454.service: Deactivated successfully. Nov 12 21:38:16.307242 update_engine[1495]: I20241112 21:38:16.307137 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 21:38:16.307805 update_engine[1495]: I20241112 21:38:16.307432 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 21:38:16.307805 update_engine[1495]: I20241112 21:38:16.307663 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 12 21:38:16.308336 update_engine[1495]: E20241112 21:38:16.308300 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 21:38:16.308461 update_engine[1495]: I20241112 21:38:16.308344 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 12 21:38:16.308461 update_engine[1495]: I20241112 21:38:16.308354 1495 omaha_request_action.cc:617] Omaha request response: Nov 12 21:38:16.308461 update_engine[1495]: E20241112 21:38:16.308447 1495 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 12 21:38:16.308561 update_engine[1495]: I20241112 21:38:16.308471 1495 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 12 21:38:16.308561 update_engine[1495]: I20241112 21:38:16.308480 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 21:38:16.308561 update_engine[1495]: I20241112 21:38:16.308486 1495 update_attempter.cc:306] Processing Done. Nov 12 21:38:16.310490 update_engine[1495]: E20241112 21:38:16.310452 1495 update_attempter.cc:619] Update failed. Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310626 1495 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310642 1495 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310673 1495 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310743 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310762 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310770 1495 omaha_request_action.cc:272] Request: Nov 12 21:38:16.311349 update_engine[1495]: Nov 12 21:38:16.311349 update_engine[1495]: Nov 12 21:38:16.311349 update_engine[1495]: Nov 12 21:38:16.311349 update_engine[1495]: Nov 12 21:38:16.311349 update_engine[1495]: Nov 12 21:38:16.311349 update_engine[1495]: Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310777 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.310923 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 21:38:16.311349 update_engine[1495]: I20241112 21:38:16.311157 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 12 21:38:16.313285 update_engine[1495]: E20241112 21:38:16.312141 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 21:38:16.313285 update_engine[1495]: I20241112 21:38:16.312211 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 12 21:38:16.313285 update_engine[1495]: I20241112 21:38:16.312223 1495 omaha_request_action.cc:617] Omaha request response: Nov 12 21:38:16.313285 update_engine[1495]: I20241112 21:38:16.312237 1495 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 21:38:16.313285 update_engine[1495]: I20241112 21:38:16.312245 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 21:38:16.313285 update_engine[1495]: I20241112 21:38:16.312253 1495 update_attempter.cc:306] Processing Done. Nov 12 21:38:16.313285 update_engine[1495]: I20241112 21:38:16.312263 1495 update_attempter.cc:310] Error event sent. Nov 12 21:38:16.313285 update_engine[1495]: I20241112 21:38:16.312274 1495 update_check_scheduler.cc:74] Next update check in 44m53s Nov 12 21:38:16.313704 locksmithd[1525]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 12 21:38:16.313704 locksmithd[1525]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 12 21:38:19.066619 systemd[1]: Started sshd@33-188.245.86.234:22-35.240.185.59:54502.service - OpenSSH per-connection server daemon (35.240.185.59:54502). Nov 12 21:38:20.025434 sshd[5975]: Connection closed by authenticating user root 35.240.185.59 port 54502 [preauth] Nov 12 21:38:20.027857 systemd[1]: sshd@33-188.245.86.234:22-35.240.185.59:54502.service: Deactivated successfully. 
Nov 12 21:38:32.114499 systemd[1]: Started sshd@34-188.245.86.234:22-35.240.185.59:53274.service - OpenSSH per-connection server daemon (35.240.185.59:53274). Nov 12 21:38:32.788587 sshd[5982]: Invalid user flask from 35.240.185.59 port 53274 Nov 12 21:38:32.949022 sshd[5982]: Connection closed by invalid user flask 35.240.185.59 port 53274 [preauth] Nov 12 21:38:32.952739 systemd[1]: sshd@34-188.245.86.234:22-35.240.185.59:53274.service: Deactivated successfully. Nov 12 21:38:45.253379 systemd[1]: Started sshd@35-188.245.86.234:22-35.240.185.59:36868.service - OpenSSH per-connection server daemon (35.240.185.59:36868). Nov 12 21:38:45.963367 sshd[6036]: Invalid user gitlab from 35.240.185.59 port 36868 Nov 12 21:38:46.126385 sshd[6036]: Connection closed by invalid user gitlab 35.240.185.59 port 36868 [preauth] Nov 12 21:38:46.129719 systemd[1]: sshd@35-188.245.86.234:22-35.240.185.59:36868.service: Deactivated successfully. Nov 12 21:38:50.635265 systemd[1]: run-containerd-runc-k8s.io-69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21-runc.sUdyjO.mount: Deactivated successfully. Nov 12 21:38:58.145566 systemd[1]: Started sshd@36-188.245.86.234:22-35.240.185.59:33852.service - OpenSSH per-connection server daemon (35.240.185.59:33852). Nov 12 21:38:59.121493 sshd[6074]: Invalid user testuser from 35.240.185.59 port 33852 Nov 12 21:38:59.293955 sshd[6074]: Connection closed by invalid user testuser 35.240.185.59 port 33852 [preauth] Nov 12 21:38:59.297401 systemd[1]: sshd@36-188.245.86.234:22-35.240.185.59:33852.service: Deactivated successfully. Nov 12 21:39:11.354302 systemd[1]: Started sshd@37-188.245.86.234:22-35.240.185.59:44492.service - OpenSSH per-connection server daemon (35.240.185.59:44492). 
Nov 12 21:39:12.020490 sshd[6105]: Invalid user postgres from 35.240.185.59 port 44492 Nov 12 21:39:12.181665 sshd[6105]: Connection closed by invalid user postgres 35.240.185.59 port 44492 [preauth] Nov 12 21:39:12.184707 systemd[1]: sshd@37-188.245.86.234:22-35.240.185.59:44492.service: Deactivated successfully. Nov 12 21:39:12.734989 systemd[1]: run-containerd-runc-k8s.io-8bd69c6611a99fbc792bbfdc7cb96c218d63525f84b07707edf5bd12ac747cc3-runc.VNUxsW.mount: Deactivated successfully. Nov 12 21:39:24.197588 systemd[1]: Started sshd@38-188.245.86.234:22-35.240.185.59:47164.service - OpenSSH per-connection server daemon (35.240.185.59:47164). Nov 12 21:39:25.132193 sshd[6131]: Invalid user jenkins from 35.240.185.59 port 47164 Nov 12 21:39:25.293574 sshd[6131]: Connection closed by invalid user jenkins 35.240.185.59 port 47164 [preauth] Nov 12 21:39:25.297336 systemd[1]: sshd@38-188.245.86.234:22-35.240.185.59:47164.service: Deactivated successfully. Nov 12 21:39:37.328546 systemd[1]: Started sshd@39-188.245.86.234:22-35.240.185.59:49114.service - OpenSSH per-connection server daemon (35.240.185.59:49114). Nov 12 21:39:38.193643 sshd[6155]: Connection closed by authenticating user root 35.240.185.59 port 49114 [preauth] Nov 12 21:39:38.197493 systemd[1]: sshd@39-188.245.86.234:22-35.240.185.59:49114.service: Deactivated successfully. Nov 12 21:39:50.535638 systemd[1]: Started sshd@40-188.245.86.234:22-35.240.185.59:53486.service - OpenSSH per-connection server daemon (35.240.185.59:53486). Nov 12 21:39:51.442444 sshd[6185]: Invalid user admin from 35.240.185.59 port 53486 Nov 12 21:39:51.602600 sshd[6185]: Connection closed by invalid user admin 35.240.185.59 port 53486 [preauth] Nov 12 21:39:51.606229 systemd[1]: sshd@40-188.245.86.234:22-35.240.185.59:53486.service: Deactivated successfully. Nov 12 21:39:56.166461 systemd[1]: Started sshd@41-188.245.86.234:22-147.75.109.163:57970.service - OpenSSH per-connection server daemon (147.75.109.163:57970). 
Nov 12 21:39:57.147397 sshd[6214]: Accepted publickey for core from 147.75.109.163 port 57970 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:39:57.149537 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:39:57.164366 systemd-logind[1492]: New session 8 of user core. Nov 12 21:39:57.171446 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 21:39:58.367653 sshd[6214]: pam_unix(sshd:session): session closed for user core Nov 12 21:39:58.376465 systemd[1]: sshd@41-188.245.86.234:22-147.75.109.163:57970.service: Deactivated successfully. Nov 12 21:39:58.381434 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 21:39:58.382855 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. Nov 12 21:39:58.384844 systemd-logind[1492]: Removed session 8. Nov 12 21:40:03.543757 systemd[1]: Started sshd@42-188.245.86.234:22-147.75.109.163:53410.service - OpenSSH per-connection server daemon (147.75.109.163:53410). Nov 12 21:40:03.984743 systemd[1]: Started sshd@43-188.245.86.234:22-35.240.185.59:51780.service - OpenSSH per-connection server daemon (35.240.185.59:51780). Nov 12 21:40:04.571827 sshd[6229]: Accepted publickey for core from 147.75.109.163 port 53410 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:40:04.576245 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:40:04.588965 systemd-logind[1492]: New session 9 of user core. Nov 12 21:40:04.599387 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 21:40:04.674884 sshd[6232]: Invalid user weblogic from 35.240.185.59 port 51780 Nov 12 21:40:04.837370 sshd[6232]: Connection closed by invalid user weblogic 35.240.185.59 port 51780 [preauth] Nov 12 21:40:04.841261 systemd[1]: sshd@43-188.245.86.234:22-35.240.185.59:51780.service: Deactivated successfully. 
Nov 12 21:40:05.361948 sshd[6229]: pam_unix(sshd:session): session closed for user core Nov 12 21:40:05.365796 systemd[1]: sshd@42-188.245.86.234:22-147.75.109.163:53410.service: Deactivated successfully. Nov 12 21:40:05.368862 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 21:40:05.371155 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. Nov 12 21:40:05.372794 systemd-logind[1492]: Removed session 9. Nov 12 21:40:10.536377 systemd[1]: Started sshd@44-188.245.86.234:22-147.75.109.163:43252.service - OpenSSH per-connection server daemon (147.75.109.163:43252). Nov 12 21:40:11.513234 sshd[6269]: Accepted publickey for core from 147.75.109.163 port 43252 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:40:11.515975 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:40:11.523967 systemd-logind[1492]: New session 10 of user core. Nov 12 21:40:11.534352 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 21:40:12.252843 sshd[6269]: pam_unix(sshd:session): session closed for user core Nov 12 21:40:12.257299 systemd[1]: sshd@44-188.245.86.234:22-147.75.109.163:43252.service: Deactivated successfully. Nov 12 21:40:12.261533 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 21:40:12.264772 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit. Nov 12 21:40:12.267191 systemd-logind[1492]: Removed session 10. Nov 12 21:40:12.425146 systemd[1]: Started sshd@45-188.245.86.234:22-147.75.109.163:43268.service - OpenSSH per-connection server daemon (147.75.109.163:43268). Nov 12 21:40:13.430239 sshd[6283]: Accepted publickey for core from 147.75.109.163 port 43268 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y Nov 12 21:40:13.432008 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 21:40:13.436606 systemd-logind[1492]: New session 11 of user core. 
Nov 12 21:40:13.442308 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 21:40:14.241421 sshd[6283]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:14.246030 systemd[1]: sshd@45-188.245.86.234:22-147.75.109.163:43268.service: Deactivated successfully.
Nov 12 21:40:14.248910 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 21:40:14.250655 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
Nov 12 21:40:14.252363 systemd-logind[1492]: Removed session 11.
Nov 12 21:40:14.419631 systemd[1]: Started sshd@46-188.245.86.234:22-147.75.109.163:43280.service - OpenSSH per-connection server daemon (147.75.109.163:43280).
Nov 12 21:40:15.418128 sshd[6318]: Accepted publickey for core from 147.75.109.163 port 43280 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:15.420756 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:15.427049 systemd-logind[1492]: New session 12 of user core.
Nov 12 21:40:15.439344 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 21:40:16.223581 sshd[6318]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:16.228557 systemd[1]: sshd@46-188.245.86.234:22-147.75.109.163:43280.service: Deactivated successfully.
Nov 12 21:40:16.231464 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 21:40:16.232914 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
Nov 12 21:40:16.234327 systemd-logind[1492]: Removed session 12.
Nov 12 21:40:17.101626 systemd[1]: Started sshd@47-188.245.86.234:22-35.240.185.59:57668.service - OpenSSH per-connection server daemon (35.240.185.59:57668).
Nov 12 21:40:17.995651 sshd[6331]: Invalid user centos from 35.240.185.59 port 57668
Nov 12 21:40:18.158466 sshd[6331]: Connection closed by invalid user centos 35.240.185.59 port 57668 [preauth]
Nov 12 21:40:18.164001 systemd[1]: sshd@47-188.245.86.234:22-35.240.185.59:57668.service: Deactivated successfully.
Nov 12 21:40:21.395314 systemd[1]: Started sshd@48-188.245.86.234:22-147.75.109.163:56226.service - OpenSSH per-connection server daemon (147.75.109.163:56226).
Nov 12 21:40:22.370153 sshd[6336]: Accepted publickey for core from 147.75.109.163 port 56226 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:22.373771 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:22.380810 systemd-logind[1492]: New session 13 of user core.
Nov 12 21:40:22.386482 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 21:40:23.128634 sshd[6336]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:23.133670 systemd[1]: sshd@48-188.245.86.234:22-147.75.109.163:56226.service: Deactivated successfully.
Nov 12 21:40:23.136472 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 21:40:23.137486 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
Nov 12 21:40:23.138739 systemd-logind[1492]: Removed session 13.
Nov 12 21:40:23.309258 systemd[1]: Started sshd@49-188.245.86.234:22-147.75.109.163:56242.service - OpenSSH per-connection server daemon (147.75.109.163:56242).
Nov 12 21:40:24.306322 sshd[6349]: Accepted publickey for core from 147.75.109.163 port 56242 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:24.308674 sshd[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:24.314751 systemd-logind[1492]: New session 14 of user core.
Nov 12 21:40:24.318586 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 21:40:25.266788 sshd[6349]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:25.272116 systemd[1]: sshd@49-188.245.86.234:22-147.75.109.163:56242.service: Deactivated successfully.
Nov 12 21:40:25.274642 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 21:40:25.275860 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
Nov 12 21:40:25.277699 systemd-logind[1492]: Removed session 14.
Nov 12 21:40:25.442463 systemd[1]: Started sshd@50-188.245.86.234:22-147.75.109.163:56248.service - OpenSSH per-connection server daemon (147.75.109.163:56248).
Nov 12 21:40:26.416232 sshd[6362]: Accepted publickey for core from 147.75.109.163 port 56248 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:26.418901 sshd[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:26.428177 systemd-logind[1492]: New session 15 of user core.
Nov 12 21:40:26.434416 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 21:40:29.123508 sshd[6362]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:29.131389 systemd[1]: sshd@50-188.245.86.234:22-147.75.109.163:56248.service: Deactivated successfully.
Nov 12 21:40:29.134128 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 21:40:29.136311 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit.
Nov 12 21:40:29.138334 systemd-logind[1492]: Removed session 15.
Nov 12 21:40:29.301635 systemd[1]: Started sshd@51-188.245.86.234:22-147.75.109.163:42154.service - OpenSSH per-connection server daemon (147.75.109.163:42154).
Nov 12 21:40:30.320009 sshd[6392]: Accepted publickey for core from 147.75.109.163 port 42154 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:30.322245 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:30.327529 systemd-logind[1492]: New session 16 of user core.
Nov 12 21:40:30.331362 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 21:40:30.354544 systemd[1]: Started sshd@52-188.245.86.234:22-35.240.185.59:34246.service - OpenSSH per-connection server daemon (35.240.185.59:34246).
Nov 12 21:40:31.026575 sshd[6396]: Invalid user steam from 35.240.185.59 port 34246
Nov 12 21:40:31.191112 sshd[6396]: Connection closed by invalid user steam 35.240.185.59 port 34246 [preauth]
Nov 12 21:40:31.194774 systemd[1]: sshd@52-188.245.86.234:22-35.240.185.59:34246.service: Deactivated successfully.
Nov 12 21:40:31.397660 sshd[6392]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:31.406731 systemd[1]: sshd@51-188.245.86.234:22-147.75.109.163:42154.service: Deactivated successfully.
Nov 12 21:40:31.413425 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 21:40:31.418930 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit.
Nov 12 21:40:31.421675 systemd-logind[1492]: Removed session 16.
Nov 12 21:40:31.572496 systemd[1]: Started sshd@53-188.245.86.234:22-147.75.109.163:42162.service - OpenSSH per-connection server daemon (147.75.109.163:42162).
Nov 12 21:40:32.542459 sshd[6408]: Accepted publickey for core from 147.75.109.163 port 42162 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:32.545393 sshd[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:32.552545 systemd-logind[1492]: New session 17 of user core.
Nov 12 21:40:32.559398 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 21:40:33.282961 sshd[6408]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:33.286913 systemd[1]: sshd@53-188.245.86.234:22-147.75.109.163:42162.service: Deactivated successfully.
Nov 12 21:40:33.289146 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 21:40:33.291623 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit.
Nov 12 21:40:33.292810 systemd-logind[1492]: Removed session 17.
Nov 12 21:40:38.459441 systemd[1]: Started sshd@54-188.245.86.234:22-147.75.109.163:42174.service - OpenSSH per-connection server daemon (147.75.109.163:42174).
Nov 12 21:40:39.444122 sshd[6448]: Accepted publickey for core from 147.75.109.163 port 42174 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:39.447951 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:39.455490 systemd-logind[1492]: New session 18 of user core.
Nov 12 21:40:39.466254 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 21:40:40.201244 sshd[6448]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:40.205785 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit.
Nov 12 21:40:40.206507 systemd[1]: sshd@54-188.245.86.234:22-147.75.109.163:42174.service: Deactivated successfully.
Nov 12 21:40:40.209312 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 21:40:40.210904 systemd-logind[1492]: Removed session 18.
Nov 12 21:40:43.298580 systemd[1]: Started sshd@55-188.245.86.234:22-35.240.185.59:53234.service - OpenSSH per-connection server daemon (35.240.185.59:53234).
Nov 12 21:40:44.210354 sshd[6483]: Invalid user test from 35.240.185.59 port 53234
Nov 12 21:40:44.371357 sshd[6483]: Connection closed by invalid user test 35.240.185.59 port 53234 [preauth]
Nov 12 21:40:44.375089 systemd[1]: sshd@55-188.245.86.234:22-35.240.185.59:53234.service: Deactivated successfully.
Nov 12 21:40:45.380609 systemd[1]: Started sshd@56-188.245.86.234:22-147.75.109.163:38704.service - OpenSSH per-connection server daemon (147.75.109.163:38704).
Nov 12 21:40:46.379508 sshd[6488]: Accepted publickey for core from 147.75.109.163 port 38704 ssh2: RSA SHA256:Wp3PhugUBjqIxkjM6ivv5Eynp411mSX/wWXI2lbvI1Y
Nov 12 21:40:46.381393 sshd[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:40:46.385970 systemd-logind[1492]: New session 19 of user core.
Nov 12 21:40:46.395344 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 21:40:47.140328 sshd[6488]: pam_unix(sshd:session): session closed for user core
Nov 12 21:40:47.144532 systemd[1]: sshd@56-188.245.86.234:22-147.75.109.163:38704.service: Deactivated successfully.
Nov 12 21:40:47.147266 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 21:40:47.147937 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit.
Nov 12 21:40:47.149522 systemd-logind[1492]: Removed session 19.
Nov 12 21:40:50.632845 systemd[1]: run-containerd-runc-k8s.io-69e3016a161adc080f9844c44bc7ad03c934e025260230928c84646e37796c21-runc.l26Li3.mount: Deactivated successfully.
Nov 12 21:40:56.645406 systemd[1]: Started sshd@57-188.245.86.234:22-35.240.185.59:52618.service - OpenSSH per-connection server daemon (35.240.185.59:52618).
Nov 12 21:40:57.305819 sshd[6522]: Invalid user test from 35.240.185.59 port 52618
Nov 12 21:40:57.467281 sshd[6522]: Connection closed by invalid user test 35.240.185.59 port 52618 [preauth]
Nov 12 21:40:57.471440 systemd[1]: sshd@57-188.245.86.234:22-35.240.185.59:52618.service: Deactivated successfully.
Nov 12 21:41:04.407878 systemd[1]: cri-containerd-105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67.scope: Deactivated successfully.
Nov 12 21:41:04.408625 systemd[1]: cri-containerd-105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67.scope: Consumed 6.419s CPU time, 22.0M memory peak, 0B memory swap peak.
Nov 12 21:41:04.506803 systemd[1]: cri-containerd-e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91.scope: Deactivated successfully.
Nov 12 21:41:04.507108 systemd[1]: cri-containerd-e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91.scope: Consumed 5.319s CPU time.
Nov 12 21:41:04.580063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67-rootfs.mount: Deactivated successfully.
Nov 12 21:41:04.627919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91-rootfs.mount: Deactivated successfully.
Nov 12 21:41:04.633016 containerd[1513]: time="2024-11-12T21:41:04.591729045Z" level=info msg="shim disconnected" id=105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67 namespace=k8s.io
Nov 12 21:41:04.634730 containerd[1513]: time="2024-11-12T21:41:04.622586592Z" level=info msg="shim disconnected" id=e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91 namespace=k8s.io
Nov 12 21:41:04.639766 containerd[1513]: time="2024-11-12T21:41:04.639525497Z" level=warning msg="cleaning up after shim disconnected" id=e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91 namespace=k8s.io
Nov 12 21:41:04.639766 containerd[1513]: time="2024-11-12T21:41:04.639559088Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 21:41:04.642571 containerd[1513]: time="2024-11-12T21:41:04.641005407Z" level=warning msg="cleaning up after shim disconnected" id=105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67 namespace=k8s.io
Nov 12 21:41:04.642571 containerd[1513]: time="2024-11-12T21:41:04.641053537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 21:41:04.689493 systemd[1]: cri-containerd-722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092.scope: Deactivated successfully.
Nov 12 21:41:04.689820 systemd[1]: cri-containerd-722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092.scope: Consumed 1.886s CPU time, 17.1M memory peak, 0B memory swap peak.
Nov 12 21:41:04.719633 kubelet[2885]: E1112 21:41:04.719590 2885 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59304->10.0.0.2:2379: read: connection timed out"
Nov 12 21:41:04.746099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092-rootfs.mount: Deactivated successfully.
Nov 12 21:41:04.749575 containerd[1513]: time="2024-11-12T21:41:04.749431792Z" level=info msg="shim disconnected" id=722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092 namespace=k8s.io
Nov 12 21:41:04.749692 containerd[1513]: time="2024-11-12T21:41:04.749676905Z" level=warning msg="cleaning up after shim disconnected" id=722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092 namespace=k8s.io
Nov 12 21:41:04.749738 containerd[1513]: time="2024-11-12T21:41:04.749727097Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 21:41:04.789516 containerd[1513]: time="2024-11-12T21:41:04.789463478Z" level=warning msg="cleanup warnings time=\"2024-11-12T21:41:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 21:41:05.519959 kubelet[2885]: I1112 21:41:05.518985 2885 scope.go:117] "RemoveContainer" containerID="e54e964b6cbfdddbce2dee1e69361f28e8c0f886eacff755f853347b20ecdf91"
Nov 12 21:41:05.520496 kubelet[2885]: I1112 21:41:05.520467 2885 scope.go:117] "RemoveContainer" containerID="105ac8aa760e497e919030a023eff5c94262c052a6b5fcea1a6004712cc98d67"
Nov 12 21:41:05.523269 kubelet[2885]: I1112 21:41:05.523240 2885 scope.go:117] "RemoveContainer" containerID="722041b1eb496d2e550ae0ace69237c313e9516889c9722a64062b7d49b2e092"
Nov 12 21:41:05.546564 containerd[1513]: time="2024-11-12T21:41:05.546505716Z" level=info msg="CreateContainer within sandbox \"8c5763051c5d164f4ccd19670372f37e3d486b573708e1390347206a48429f65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 12 21:41:05.551025 containerd[1513]: time="2024-11-12T21:41:05.550788460Z" level=info msg="CreateContainer within sandbox \"c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 12 21:41:05.567963 containerd[1513]: time="2024-11-12T21:41:05.567342480Z" level=info msg="CreateContainer within sandbox \"f7bfcd15e9c3e7fa2e55b5aab7276c6390d8f9bfaff763e093afb60fb71016f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 12 21:41:05.701042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781024146.mount: Deactivated successfully.
Nov 12 21:41:05.708664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967617245.mount: Deactivated successfully.
Nov 12 21:41:05.720166 containerd[1513]: time="2024-11-12T21:41:05.720059198Z" level=info msg="CreateContainer within sandbox \"8c5763051c5d164f4ccd19670372f37e3d486b573708e1390347206a48429f65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a66fe128e443328d33508c110ece48d2c888a520e3d885bcd618034c3d371ca2\""
Nov 12 21:41:05.727666 containerd[1513]: time="2024-11-12T21:41:05.726244824Z" level=info msg="StartContainer for \"a66fe128e443328d33508c110ece48d2c888a520e3d885bcd618034c3d371ca2\""
Nov 12 21:41:05.730451 containerd[1513]: time="2024-11-12T21:41:05.730416442Z" level=info msg="CreateContainer within sandbox \"f7bfcd15e9c3e7fa2e55b5aab7276c6390d8f9bfaff763e093afb60fb71016f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3c23708938aa12a09fdc3d0da7104fdcfc90fa6b2b12787da6145c0ddf3d57e8\""
Nov 12 21:41:05.731436 containerd[1513]: time="2024-11-12T21:41:05.731416779Z" level=info msg="StartContainer for \"3c23708938aa12a09fdc3d0da7104fdcfc90fa6b2b12787da6145c0ddf3d57e8\""
Nov 12 21:41:05.735392 containerd[1513]: time="2024-11-12T21:41:05.735354515Z" level=info msg="CreateContainer within sandbox \"c743a455cc0b8e406786b9774c3cc868a04d055a67a8f762d84ebcb5572080e9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fdcdd3d31305d8bff4fc8401a9deab48039c7d26349eda433f5f9b1121e35ff8\""
Nov 12 21:41:05.736147 containerd[1513]: time="2024-11-12T21:41:05.736131219Z" level=info msg="StartContainer for \"fdcdd3d31305d8bff4fc8401a9deab48039c7d26349eda433f5f9b1121e35ff8\""
Nov 12 21:41:05.785377 systemd[1]: Started cri-containerd-3c23708938aa12a09fdc3d0da7104fdcfc90fa6b2b12787da6145c0ddf3d57e8.scope - libcontainer container 3c23708938aa12a09fdc3d0da7104fdcfc90fa6b2b12787da6145c0ddf3d57e8.
Nov 12 21:41:05.786601 systemd[1]: Started cri-containerd-a66fe128e443328d33508c110ece48d2c888a520e3d885bcd618034c3d371ca2.scope - libcontainer container a66fe128e443328d33508c110ece48d2c888a520e3d885bcd618034c3d371ca2.
Nov 12 21:41:05.792719 systemd[1]: Started cri-containerd-fdcdd3d31305d8bff4fc8401a9deab48039c7d26349eda433f5f9b1121e35ff8.scope - libcontainer container fdcdd3d31305d8bff4fc8401a9deab48039c7d26349eda433f5f9b1121e35ff8.
Nov 12 21:41:05.882109 containerd[1513]: time="2024-11-12T21:41:05.881891207Z" level=info msg="StartContainer for \"3c23708938aa12a09fdc3d0da7104fdcfc90fa6b2b12787da6145c0ddf3d57e8\" returns successfully"
Nov 12 21:41:05.883157 containerd[1513]: time="2024-11-12T21:41:05.882488810Z" level=info msg="StartContainer for \"a66fe128e443328d33508c110ece48d2c888a520e3d885bcd618034c3d371ca2\" returns successfully"
Nov 12 21:41:05.911323 containerd[1513]: time="2024-11-12T21:41:05.911212901Z" level=info msg="StartContainer for \"fdcdd3d31305d8bff4fc8401a9deab48039c7d26349eda433f5f9b1121e35ff8\" returns successfully"
Nov 12 21:41:07.585048 kubelet[2885]: E1112 21:41:07.569293 2885 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59108->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-0-6-01c097edc7.1807567870ae8403 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-0-6-01c097edc7,UID:862df74c55eda4f90819a8743842ffba,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-0-6-01c097edc7,},FirstTimestamp:2024-11-12 21:40:57.033794563 +0000 UTC m=+351.058020073,LastTimestamp:2024-11-12 21:40:57.033794563 +0000 UTC m=+351.058020073,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-0-6-01c097edc7,}"