Nov 13 12:05:27.036293 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 13 12:05:27.036330 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 13 12:05:27.036346 kernel: BIOS-provided physical RAM map:
Nov 13 12:05:27.036374 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 13 12:05:27.036385 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 13 12:05:27.036395 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 13 12:05:27.036407 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 13 12:05:27.036430 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 13 12:05:27.036440 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 13 12:05:27.036450 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 13 12:05:27.036460 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 13 12:05:27.036470 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 13 12:05:27.036498 kernel: NX (Execute Disable) protection: active
Nov 13 12:05:27.036508 kernel: APIC: Static calls initialized
Nov 13 12:05:27.036520 kernel: SMBIOS 2.8 present.
Nov 13 12:05:27.036531 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 13 12:05:27.037184 kernel: Hypervisor detected: KVM
Nov 13 12:05:27.037206 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 13 12:05:27.037219 kernel: kvm-clock: using sched offset of 4408919587 cycles
Nov 13 12:05:27.037232 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 13 12:05:27.037244 kernel: tsc: Detected 2499.998 MHz processor
Nov 13 12:05:27.037256 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 13 12:05:27.037268 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 13 12:05:27.037280 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 13 12:05:27.037292 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 13 12:05:27.037304 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 13 12:05:27.037321 kernel: Using GB pages for direct mapping
Nov 13 12:05:27.037333 kernel: ACPI: Early table checksum verification disabled
Nov 13 12:05:27.037345 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 13 12:05:27.037357 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 12:05:27.037369 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 12:05:27.037381 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 12:05:27.037393 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 13 12:05:27.037405 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 12:05:27.037417 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 12:05:27.037434 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 12:05:27.037446 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 12:05:27.037458 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 13 12:05:27.037470 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 13 12:05:27.037482 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 13 12:05:27.037517 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 13 12:05:27.037532 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 13 12:05:27.037551 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 13 12:05:27.037564 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 13 12:05:27.037576 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 13 12:05:27.037601 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 13 12:05:27.037612 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 13 12:05:27.037624 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 13 12:05:27.037635 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 13 12:05:27.037651 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 13 12:05:27.037662 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 13 12:05:27.037674 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 13 12:05:27.037685 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 13 12:05:27.037697 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 13 12:05:27.037708 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 13 12:05:27.037719 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 13 12:05:27.037731 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 13 12:05:27.037742 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 13 12:05:27.037753 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 13 12:05:27.037769 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 13 12:05:27.037781 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 13 12:05:27.037792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 13 12:05:27.037804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 13 12:05:27.037828 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 13 12:05:27.037840 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 13 12:05:27.037853 kernel: Zone ranges:
Nov 13 12:05:27.037865 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 13 12:05:27.037889 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 13 12:05:27.037905 kernel: Normal empty
Nov 13 12:05:27.037917 kernel: Movable zone start for each node
Nov 13 12:05:27.037928 kernel: Early memory node ranges
Nov 13 12:05:27.037940 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 13 12:05:27.037951 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 13 12:05:27.037963 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 13 12:05:27.037974 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 13 12:05:27.037986 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 13 12:05:27.037998 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 13 12:05:27.038021 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 13 12:05:27.038050 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 13 12:05:27.038063 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 13 12:05:27.038076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 13 12:05:27.038088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 13 12:05:27.038100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 13 12:05:27.038112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 13 12:05:27.038125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 13 12:05:27.038137 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 13 12:05:27.038149 kernel: TSC deadline timer available
Nov 13 12:05:27.038167 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 13 12:05:27.038180 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 13 12:05:27.038192 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 13 12:05:27.038205 kernel: Booting paravirtualized kernel on KVM
Nov 13 12:05:27.038217 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 13 12:05:27.038230 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 13 12:05:27.038243 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Nov 13 12:05:27.038255 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Nov 13 12:05:27.038267 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 13 12:05:27.038285 kernel: kvm-guest: PV spinlocks enabled
Nov 13 12:05:27.038297 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 13 12:05:27.038312 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 13 12:05:27.038327 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 13 12:05:27.038339 kernel: random: crng init done
Nov 13 12:05:27.038351 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 13 12:05:27.038364 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 13 12:05:27.038377 kernel: Fallback order for Node 0: 0
Nov 13 12:05:27.038407 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 13 12:05:27.038418 kernel: Policy zone: DMA32
Nov 13 12:05:27.038430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 13 12:05:27.038454 kernel: software IO TLB: area num 16.
Nov 13 12:05:27.038466 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 194832K reserved, 0K cma-reserved)
Nov 13 12:05:27.038477 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 13 12:05:27.038488 kernel: Kernel/User page tables isolation: enabled
Nov 13 12:05:27.038499 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 13 12:05:27.038511 kernel: ftrace: allocated 148 pages with 3 groups
Nov 13 12:05:27.038541 kernel: Dynamic Preempt: voluntary
Nov 13 12:05:27.038555 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 13 12:05:27.038567 kernel: rcu: RCU event tracing is enabled.
Nov 13 12:05:27.038578 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 13 12:05:27.038590 kernel: Trampoline variant of Tasks RCU enabled.
Nov 13 12:05:27.038629 kernel: Rude variant of Tasks RCU enabled.
Nov 13 12:05:27.038645 kernel: Tracing variant of Tasks RCU enabled.
Nov 13 12:05:27.038658 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 13 12:05:27.038682 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 13 12:05:27.038695 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 13 12:05:27.038707 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 13 12:05:27.038720 kernel: Console: colour VGA+ 80x25
Nov 13 12:05:27.038737 kernel: printk: console [tty0] enabled
Nov 13 12:05:27.038750 kernel: printk: console [ttyS0] enabled
Nov 13 12:05:27.038763 kernel: ACPI: Core revision 20230628
Nov 13 12:05:27.038788 kernel: APIC: Switch to symmetric I/O mode setup
Nov 13 12:05:27.038801 kernel: x2apic enabled
Nov 13 12:05:27.038819 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 13 12:05:27.038832 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 13 12:05:27.038846 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Nov 13 12:05:27.038863 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 13 12:05:27.038876 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 13 12:05:27.038889 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 13 12:05:27.038902 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 13 12:05:27.038915 kernel: Spectre V2 : Mitigation: Retpolines
Nov 13 12:05:27.038935 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 13 12:05:27.038953 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 13 12:05:27.038966 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 13 12:05:27.038979 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 13 12:05:27.039004 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 13 12:05:27.039016 kernel: MDS: Mitigation: Clear CPU buffers
Nov 13 12:05:27.039037 kernel: MMIO Stale Data: Unknown: No mitigations
Nov 13 12:05:27.039051 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 13 12:05:27.039077 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 13 12:05:27.039090 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 13 12:05:27.039103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 13 12:05:27.039116 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 13 12:05:27.039135 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 13 12:05:27.039148 kernel: Freeing SMP alternatives memory: 32K
Nov 13 12:05:27.039161 kernel: pid_max: default: 32768 minimum: 301
Nov 13 12:05:27.039174 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 13 12:05:27.039186 kernel: landlock: Up and running.
Nov 13 12:05:27.039199 kernel: SELinux: Initializing.
Nov 13 12:05:27.039212 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 12:05:27.039226 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 12:05:27.039239 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Nov 13 12:05:27.039252 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 13 12:05:27.039265 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 13 12:05:27.039284 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 13 12:05:27.039297 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Nov 13 12:05:27.039310 kernel: signal: max sigframe size: 1776
Nov 13 12:05:27.039323 kernel: rcu: Hierarchical SRCU implementation.
Nov 13 12:05:27.039337 kernel: rcu: Max phase no-delay instances is 400.
Nov 13 12:05:27.039350 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 13 12:05:27.039363 kernel: smp: Bringing up secondary CPUs ...
Nov 13 12:05:27.039376 kernel: smpboot: x86: Booting SMP configuration:
Nov 13 12:05:27.039389 kernel: .... node #0, CPUs: #1
Nov 13 12:05:27.039407 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 13 12:05:27.039433 kernel: smp: Brought up 1 node, 2 CPUs
Nov 13 12:05:27.039446 kernel: smpboot: Max logical packages: 16
Nov 13 12:05:27.039458 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Nov 13 12:05:27.039471 kernel: devtmpfs: initialized
Nov 13 12:05:27.039495 kernel: x86/mm: Memory block size: 128MB
Nov 13 12:05:27.039508 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 13 12:05:27.039520 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 13 12:05:27.039532 kernel: pinctrl core: initialized pinctrl subsystem
Nov 13 12:05:27.039639 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 13 12:05:27.039653 kernel: audit: initializing netlink subsys (disabled)
Nov 13 12:05:27.039665 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 13 12:05:27.039677 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 13 12:05:27.039689 kernel: audit: type=2000 audit(1731499525.630:1): state=initialized audit_enabled=0 res=1
Nov 13 12:05:27.039700 kernel: cpuidle: using governor menu
Nov 13 12:05:27.039712 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 13 12:05:27.039724 kernel: dca service started, version 1.12.1
Nov 13 12:05:27.039736 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 13 12:05:27.039753 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 13 12:05:27.039765 kernel: PCI: Using configuration type 1 for base access
Nov 13 12:05:27.039777 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 13 12:05:27.039789 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 13 12:05:27.039801 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 13 12:05:27.039813 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 13 12:05:27.039825 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 13 12:05:27.039837 kernel: ACPI: Added _OSI(Module Device)
Nov 13 12:05:27.039848 kernel: ACPI: Added _OSI(Processor Device)
Nov 13 12:05:27.039865 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 13 12:05:27.039877 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 13 12:05:27.039900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 13 12:05:27.039913 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 13 12:05:27.039926 kernel: ACPI: Interpreter enabled
Nov 13 12:05:27.039938 kernel: ACPI: PM: (supports S0 S5)
Nov 13 12:05:27.039951 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 13 12:05:27.039963 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 13 12:05:27.039976 kernel: PCI: Using E820 reservations for host bridge windows
Nov 13 12:05:27.040006 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 13 12:05:27.040019 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 13 12:05:27.040292 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 13 12:05:27.040484 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 13 12:05:27.040698 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 13 12:05:27.040716 kernel: PCI host bridge to bus 0000:00
Nov 13 12:05:27.040925 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 13 12:05:27.041142 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 13 12:05:27.041299 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 13 12:05:27.041465 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 13 12:05:27.041650 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 13 12:05:27.041796 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 13 12:05:27.041956 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 13 12:05:27.042188 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 13 12:05:27.042396 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 13 12:05:27.042610 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 13 12:05:27.042785 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 13 12:05:27.042967 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 13 12:05:27.043165 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 13 12:05:27.043351 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.043563 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 13 12:05:27.043758 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.043953 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 13 12:05:27.044178 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.044351 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 13 12:05:27.044573 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.044752 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 13 12:05:27.044932 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.045126 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 13 12:05:27.045316 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.045488 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 13 12:05:27.045711 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.045882 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 13 12:05:27.046093 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 13 12:05:27.046269 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 13 12:05:27.046453 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 13 12:05:27.046666 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 13 12:05:27.046855 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 13 12:05:27.047060 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 13 12:05:27.047243 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 13 12:05:27.047438 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 13 12:05:27.047664 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 13 12:05:27.047826 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 13 12:05:27.048015 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 13 12:05:27.048221 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 13 12:05:27.048395 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 13 12:05:27.049237 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 13 12:05:27.049414 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 13 12:05:27.049636 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 13 12:05:27.049814 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 13 12:05:27.049981 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 13 12:05:27.050179 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 13 12:05:27.050362 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 13 12:05:27.050546 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 13 12:05:27.050714 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 13 12:05:27.050905 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 13 12:05:27.051097 kernel: pci_bus 0000:02: extended config space not accessible
Nov 13 12:05:27.051300 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 13 12:05:27.051491 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 13 12:05:27.051693 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 13 12:05:27.051873 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 13 12:05:27.052097 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 13 12:05:27.052277 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 13 12:05:27.052449 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 13 12:05:27.054668 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 13 12:05:27.054856 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 13 12:05:27.055060 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 13 12:05:27.055240 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 13 12:05:27.055420 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 13 12:05:27.055614 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 13 12:05:27.055780 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 13 12:05:27.055990 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 13 12:05:27.056186 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 13 12:05:27.056383 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 13 12:05:27.057619 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 13 12:05:27.057879 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 13 12:05:27.058085 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 13 12:05:27.058269 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 13 12:05:27.058449 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 13 12:05:27.059746 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 13 12:05:27.059948 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 13 12:05:27.060153 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 13 12:05:27.060340 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 13 12:05:27.061974 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 13 12:05:27.062174 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 13 12:05:27.062349 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 13 12:05:27.062370 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 13 12:05:27.062385 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 13 12:05:27.062409 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 13 12:05:27.062433 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 13 12:05:27.062447 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 13 12:05:27.062461 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 13 12:05:27.062474 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 13 12:05:27.062488 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 13 12:05:27.062501 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 13 12:05:27.062530 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 13 12:05:27.062544 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 13 12:05:27.062557 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 13 12:05:27.062577 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 13 12:05:27.062591 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 13 12:05:27.062607 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 13 12:05:27.062620 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 13 12:05:27.062634 kernel: iommu: Default domain type: Translated
Nov 13 12:05:27.062648 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 13 12:05:27.062661 kernel: PCI: Using ACPI for IRQ routing
Nov 13 12:05:27.062679 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 13 12:05:27.062692 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 13 12:05:27.062710 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 13 12:05:27.062883 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 13 12:05:27.063067 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 13 12:05:27.063251 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 13 12:05:27.063272 kernel: vgaarb: loaded
Nov 13 12:05:27.063286 kernel: clocksource: Switched to clocksource kvm-clock
Nov 13 12:05:27.063299 kernel: VFS: Disk quotas dquot_6.6.0
Nov 13 12:05:27.063312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 13 12:05:27.063325 kernel: pnp: PnP ACPI init
Nov 13 12:05:27.066569 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 13 12:05:27.066599 kernel: pnp: PnP ACPI: found 5 devices
Nov 13 12:05:27.066614 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 13 12:05:27.066635 kernel: NET: Registered PF_INET protocol family
Nov 13 12:05:27.066649 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 13 12:05:27.066663 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 13 12:05:27.066676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 13 12:05:27.066690 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 13 12:05:27.066711 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 13 12:05:27.066725 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 13 12:05:27.066738 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 12:05:27.066752 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 12:05:27.066765 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 13 12:05:27.066779 kernel: NET: Registered PF_XDP protocol family
Nov 13 12:05:27.066964 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 13 12:05:27.067156 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 13 12:05:27.067378 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 13 12:05:27.067574 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 13 12:05:27.067742 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 13 12:05:27.067909 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 13 12:05:27.068089 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 13 12:05:27.068254 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 13 12:05:27.068428 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 13 12:05:27.070652 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 13 12:05:27.070832 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 13 12:05:27.071017 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 13 12:05:27.071205 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 13 12:05:27.071375 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 13 12:05:27.072839 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 13 12:05:27.073090 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 13 12:05:27.073303 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 13 12:05:27.073489 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 13 12:05:27.073713 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 13 12:05:27.073892 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 13 12:05:27.074088 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 13 12:05:27.074321 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 13 12:05:27.074538 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 13 12:05:27.076710 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 13 12:05:27.076908 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 13 12:05:27.077099 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 13 12:05:27.077273 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 13 12:05:27.077457 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 13 12:05:27.078715 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 13 12:05:27.078904 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 13 12:05:27.079104 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 13 12:05:27.079286 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 13 12:05:27.079457 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 13 12:05:27.079668 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 13 12:05:27.079836 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 13 12:05:27.080006 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 13 12:05:27.080200 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 13 12:05:27.080371 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 13 12:05:27.081588 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 13 12:05:27.081779 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 13 12:05:27.081950 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 13 12:05:27.082147 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 13 12:05:27.082316 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 13 12:05:27.082483 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 13 12:05:27.082698 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 13 12:05:27.082871 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 13 12:05:27.083063 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 13 12:05:27.083231 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 13 12:05:27.083397 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 13 12:05:27.085632 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 13 12:05:27.085800 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 13 12:05:27.085966 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 13 12:05:27.086136 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 13 12:05:27.086298 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 13 12:05:27.086448 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 13 12:05:27.086615 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 13 12:05:27.086814 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 13 12:05:27.086982 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 13 12:05:27.087161 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 13 12:05:27.087335 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 13 12:05:27.093322 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 13 12:05:27.093497 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 13 12:05:27.093692 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 13 12:05:27.093877 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 13 12:05:27.094047 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 13 12:05:27.094207 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 13 12:05:27.094388 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 13 12:05:27.094575 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 13 12:05:27.094746 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 13 12:05:27.094943 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 13 12:05:27.095117 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 13 12:05:27.095276 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 13 12:05:27.095442 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 13 12:05:27.095642 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 13 12:05:27.095812 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 13 12:05:27.095998 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 13 12:05:27.096174 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 13 12:05:27.096349 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 13 12:05:27.096556 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 13 12:05:27.096730 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 13 12:05:27.096900 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 13 12:05:27.096922 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 13 12:05:27.096937 kernel: PCI: CLS 0 bytes, default 64
Nov 13 12:05:27.096959 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 13 12:05:27.096973 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Nov 13 12:05:27.096987 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 13 12:05:27.097001 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 13 12:05:27.097015 kernel: Initialise system trusted keyrings
Nov 13 12:05:27.097049 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 13 12:05:27.097065 kernel: Key type asymmetric registered
Nov 13 12:05:27.097079 kernel: Asymmetric key parser 'x509' registered
Nov 13 12:05:27.097101 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 13 12:05:27.097115 kernel: io scheduler mq-deadline registered
Nov 13 12:05:27.097129 kernel: io scheduler kyber registered
Nov 13 12:05:27.097143 kernel: io scheduler bfq registered
Nov 13 12:05:27.097329 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 13 12:05:27.101574 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 13 12:05:27.101774 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.101950 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 13 12:05:27.102145 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 13 12:05:27.102327 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.102586 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 13 12:05:27.102758 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 13 12:05:27.102934 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.103117 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 13 12:05:27.103304 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 13 12:05:27.103482 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.103689 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 13 12:05:27.103868 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 13 12:05:27.104091 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.104260 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 13 12:05:27.104429 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 13 12:05:27.110658 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.110838 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 13 12:05:27.111007 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 13 12:05:27.111203 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.111372 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 13 12:05:27.111560 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 13 12:05:27.111728 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 13 12:05:27.111750 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 13 12:05:27.111766 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 13 12:05:27.111787 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 13 12:05:27.111802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 13 12:05:27.111824 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 13 12:05:27.111839 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 13 12:05:27.111853 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 13 12:05:27.111867 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 13 12:05:27.112066 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 13 12:05:27.112089 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 13 12:05:27.112250 kernel: rtc_cmos 00:03: registered as rtc0
Nov 13 12:05:27.112413 kernel: rtc_cmos 00:03: setting system clock to 2024-11-13T12:05:26 UTC (1731499526)
Nov 13 12:05:27.112624 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 13 12:05:27.112657 kernel: intel_pstate: CPU model not supported
Nov 13 12:05:27.112679 kernel: NET: Registered PF_INET6 protocol family
Nov 13 12:05:27.112693 kernel: Segment Routing with IPv6
Nov 13 12:05:27.112713 kernel: In-situ OAM (IOAM) with IPv6
Nov 13 12:05:27.112727 kernel: NET: Registered PF_PACKET protocol family
Nov 13 12:05:27.112741 kernel: Key type dns_resolver registered
Nov 13 12:05:27.112759 kernel: IPI shorthand broadcast: enabled
Nov 13 12:05:27.112775 kernel: sched_clock: Marking stable (1310004634, 247823578)->(1815591480, -257763268)
Nov 13 12:05:27.112790 kernel: registered taskstats version 1
Nov 13 12:05:27.112804 kernel: Loading compiled-in X.509 certificates
Nov 13 12:05:27.112817 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 13 12:05:27.112831 kernel: Key type .fscrypt registered
Nov 13 12:05:27.112844 kernel: Key type fscrypt-provisioning registered
Nov 13 12:05:27.112858 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 13 12:05:27.112872 kernel: ima: Allocated hash algorithm: sha1
Nov 13 12:05:27.112891 kernel: ima: No architecture policies found
Nov 13 12:05:27.112905 kernel: clk: Disabling unused clocks
Nov 13 12:05:27.112919 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 13 12:05:27.112933 kernel: Write protecting the kernel read-only data: 36864k
Nov 13 12:05:27.112947 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 13 12:05:27.112961 kernel: Run /init as init process
Nov 13 12:05:27.112975 kernel: with arguments:
Nov 13 12:05:27.112989 kernel: /init
Nov 13 12:05:27.113012 kernel: with environment:
Nov 13 12:05:27.113044 kernel: HOME=/
Nov 13 12:05:27.113058 kernel: TERM=linux
Nov 13 12:05:27.113071 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 13 12:05:27.113089 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 13 12:05:27.113106 systemd[1]: Detected virtualization kvm.
Nov 13 12:05:27.113121 systemd[1]: Detected architecture x86-64.
Nov 13 12:05:27.113135 systemd[1]: Running in initrd.
Nov 13 12:05:27.113149 systemd[1]: No hostname configured, using default hostname.
Nov 13 12:05:27.113170 systemd[1]: Hostname set to .
Nov 13 12:05:27.113185 systemd[1]: Initializing machine ID from VM UUID.
Nov 13 12:05:27.113200 systemd[1]: Queued start job for default target initrd.target.
Nov 13 12:05:27.113215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 12:05:27.113230 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 12:05:27.113245 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 13 12:05:27.113261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 13 12:05:27.113281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 13 12:05:27.113296 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 13 12:05:27.113313 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 13 12:05:27.113338 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 13 12:05:27.113353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 12:05:27.113368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 13 12:05:27.113382 systemd[1]: Reached target paths.target - Path Units.
Nov 13 12:05:27.113403 systemd[1]: Reached target slices.target - Slice Units.
Nov 13 12:05:27.113418 systemd[1]: Reached target swap.target - Swaps.
Nov 13 12:05:27.113433 systemd[1]: Reached target timers.target - Timer Units.
Nov 13 12:05:27.113455 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 13 12:05:27.113470 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 13 12:05:27.113485 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 13 12:05:27.113500 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 13 12:05:27.113545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 12:05:27.113566 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 13 12:05:27.113600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 12:05:27.113617 systemd[1]: Reached target sockets.target - Socket Units.
Nov 13 12:05:27.113632 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 13 12:05:27.113646 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 13 12:05:27.113661 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 13 12:05:27.113685 systemd[1]: Starting systemd-fsck-usr.service...
Nov 13 12:05:27.113700 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 13 12:05:27.113714 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 13 12:05:27.113729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 12:05:27.113803 systemd-journald[201]: Collecting audit messages is disabled.
Nov 13 12:05:27.113837 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 13 12:05:27.113853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 12:05:27.113874 systemd[1]: Finished systemd-fsck-usr.service.
Nov 13 12:05:27.113895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 13 12:05:27.113911 systemd-journald[201]: Journal started
Nov 13 12:05:27.113944 systemd-journald[201]: Runtime Journal (/run/log/journal/8b5a1049b04d4830b3698c72c747234f) is 4.7M, max 38.0M, 33.2M free.
Nov 13 12:05:27.057335 systemd-modules-load[202]: Inserted module 'overlay'
Nov 13 12:05:27.147205 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 13 12:05:27.147239 kernel: Bridge firewalling registered
Nov 13 12:05:27.119653 systemd-modules-load[202]: Inserted module 'br_netfilter'
Nov 13 12:05:27.163124 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 13 12:05:27.163195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 13 12:05:27.165446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 12:05:27.167635 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 12:05:27.185749 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 12:05:27.189730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 13 12:05:27.191454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 13 12:05:27.202655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 13 12:05:27.218855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 12:05:27.225718 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 13 12:05:27.226805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 12:05:27.231438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 12:05:27.233081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 12:05:27.239809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 13 12:05:27.255922 dracut-cmdline[229]: dracut-dracut-053
Nov 13 12:05:27.261719 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 13 12:05:27.292104 systemd-resolved[235]: Positive Trust Anchors:
Nov 13 12:05:27.292126 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 13 12:05:27.292170 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 13 12:05:27.305710 systemd-resolved[235]: Defaulting to hostname 'linux'.
Nov 13 12:05:27.307977 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 13 12:05:27.309185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 13 12:05:27.377586 kernel: SCSI subsystem initialized
Nov 13 12:05:27.389537 kernel: Loading iSCSI transport class v2.0-870.
Nov 13 12:05:27.403543 kernel: iscsi: registered transport (tcp)
Nov 13 12:05:27.430122 kernel: iscsi: registered transport (qla4xxx)
Nov 13 12:05:27.430169 kernel: QLogic iSCSI HBA Driver
Nov 13 12:05:27.486302 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 13 12:05:27.492712 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 13 12:05:27.536934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 13 12:05:27.537039 kernel: device-mapper: uevent: version 1.0.3
Nov 13 12:05:27.537959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 13 12:05:27.588569 kernel: raid6: sse2x4 gen() 12673 MB/s
Nov 13 12:05:27.606566 kernel: raid6: sse2x2 gen() 8875 MB/s
Nov 13 12:05:27.625259 kernel: raid6: sse2x1 gen() 9408 MB/s
Nov 13 12:05:27.625298 kernel: raid6: using algorithm sse2x4 gen() 12673 MB/s
Nov 13 12:05:27.644191 kernel: raid6: .... xor() 7361 MB/s, rmw enabled
Nov 13 12:05:27.644265 kernel: raid6: using ssse3x2 recovery algorithm
Nov 13 12:05:27.671563 kernel: xor: automatically using best checksumming function avx
Nov 13 12:05:27.874551 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 13 12:05:27.889778 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 13 12:05:27.896767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 12:05:27.929290 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Nov 13 12:05:27.937026 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 12:05:27.948929 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 13 12:05:27.977831 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Nov 13 12:05:28.020750 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 13 12:05:28.027801 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 13 12:05:28.148904 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 12:05:28.157822 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 13 12:05:28.186532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 13 12:05:28.190737 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 13 12:05:28.193664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 12:05:28.195609 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 13 12:05:28.203717 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 13 12:05:28.233424 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 13 12:05:28.283224 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Nov 13 12:05:28.338142 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 13 12:05:28.338366 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 13 12:05:28.338389 kernel: GPT:17805311 != 125829119
Nov 13 12:05:28.338418 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 13 12:05:28.338436 kernel: GPT:17805311 != 125829119
Nov 13 12:05:28.338452 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 13 12:05:28.338469 kernel: cryptd: max_cpu_qlen set to 1000
Nov 13 12:05:28.338486 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 12:05:28.338503 kernel: AVX version of gcm_enc/dec engaged.
Nov 13 12:05:28.338561 kernel: AES CTR mode by8 optimization enabled
Nov 13 12:05:28.344048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 13 12:05:28.344234 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 12:05:28.350341 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 12:05:28.351831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 12:05:28.352820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 12:05:28.356460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 12:05:28.368113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 12:05:28.389961 kernel: ACPI: bus type USB registered
Nov 13 12:05:28.390039 kernel: usbcore: registered new interface driver usbfs
Nov 13 12:05:28.393953 kernel: usbcore: registered new interface driver hub
Nov 13 12:05:28.393989 kernel: usbcore: registered new device driver usb
Nov 13 12:05:28.413545 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (468)
Nov 13 12:05:28.452574 kernel: libata version 3.00 loaded.
Nov 13 12:05:28.452675 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466)
Nov 13 12:05:28.460933 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Nov 13 12:05:28.477039 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Nov 13 12:05:28.477285 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 13 12:05:28.477538 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Nov 13 12:05:28.477750 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Nov 13 12:05:28.477967 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Nov 13 12:05:28.478197 kernel: hub 1-0:1.0: USB hub found
Nov 13 12:05:28.478426 kernel: hub 1-0:1.0: 4 ports detected
Nov 13 12:05:28.478668 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 13 12:05:28.478910 kernel: hub 2-0:1.0: USB hub found
Nov 13 12:05:28.479141 kernel: hub 2-0:1.0: 4 ports detected
Nov 13 12:05:28.470169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 13 12:05:28.568541 kernel: ahci 0000:00:1f.2: version 3.0
Nov 13 12:05:28.568880 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 13 12:05:28.568919 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 13 12:05:28.569143 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 13 12:05:28.569359 kernel: scsi host0: ahci
Nov 13 12:05:28.569605 kernel: scsi host1: ahci
Nov 13 12:05:28.569825 kernel: scsi host2: ahci
Nov 13 12:05:28.570049 kernel: scsi host3: ahci
Nov 13 12:05:28.570258 kernel: scsi host4: ahci
Nov 13 12:05:28.570465 kernel: scsi host5: ahci
Nov 13 12:05:28.570704 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Nov 13 12:05:28.570726 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Nov 13 12:05:28.570765 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Nov 13 12:05:28.570783 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Nov 13 12:05:28.570801 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Nov 13 12:05:28.570827 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Nov 13 12:05:28.569522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 12:05:28.578255 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 13 12:05:28.584787 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 13 12:05:28.585638 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 13 12:05:28.593681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 13 12:05:28.607799 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 13 12:05:28.612264 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 12:05:28.617088 disk-uuid[563]: Primary Header is updated. Nov 13 12:05:28.617088 disk-uuid[563]: Secondary Entries is updated. Nov 13 12:05:28.617088 disk-uuid[563]: Secondary Header is updated. Nov 13 12:05:28.625444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 12:05:28.632525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 12:05:28.638332 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 12:05:28.640851 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 12:05:28.721580 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 13 12:05:28.812119 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 13 12:05:28.812197 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 13 12:05:28.813521 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 13 12:05:28.815522 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 13 12:05:28.817375 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 13 12:05:28.819093 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 13 12:05:28.866569 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 13 12:05:28.874011 kernel: usbcore: registered new interface driver usbhid Nov 13 12:05:28.874062 kernel: usbhid: USB HID core driver Nov 13 12:05:28.880613 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 13 12:05:28.880651 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 13 12:05:29.639324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 12:05:29.639494 disk-uuid[564]: The operation has completed successfully. Nov 13 12:05:29.699689 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 13 12:05:29.699858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 13 12:05:29.723729 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 13 12:05:29.741189 sh[586]: Success Nov 13 12:05:29.760082 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Nov 13 12:05:29.821161 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 13 12:05:29.830641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 13 12:05:29.833877 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
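
The "Found device ..." entries above and in the previous chunk are systemd waiting for udev to publish symlinks under /dev/disk for the EFI-SYSTEM, ROOT, OEM and USR-A partitions. A small Linux-only sketch that lists the same by-label, by-partlabel and by-partuuid links (paths as in the log) and the block devices they resolve to:

# Enumerate the udev-maintained symlinks behind the .device units named above.
import glob
import os

for directory in ("/dev/disk/by-label", "/dev/disk/by-partlabel", "/dev/disk/by-partuuid"):
    for link in sorted(glob.glob(directory + "/*")):
        # e.g. /dev/disk/by-label/ROOT -> /dev/vda9 on the machine in this log
        print(f"{link} -> {os.path.realpath(link)}")
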
Nov 13 12:05:29.864609 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 13 12:05:29.864658 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 13 12:05:29.866636 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 13 12:05:29.868763 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 13 12:05:29.871449 kernel: BTRFS info (device dm-0): using free space tree Nov 13 12:05:29.880462 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 13 12:05:29.881929 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 13 12:05:29.889830 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 13 12:05:29.892533 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 13 12:05:29.907626 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 12:05:29.911264 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 12:05:29.911295 kernel: BTRFS info (device vda6): using free space tree Nov 13 12:05:29.915534 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 12:05:29.933687 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 12:05:29.933182 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 13 12:05:29.944193 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 13 12:05:29.951486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 13 12:05:30.068907 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 13 12:05:30.082405 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 13 12:05:30.113207 ignition[681]: Ignition 2.19.0 Nov 13 12:05:30.113232 ignition[681]: Stage: fetch-offline Nov 13 12:05:30.113337 ignition[681]: no configs at "/usr/lib/ignition/base.d" Nov 13 12:05:30.113374 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 12:05:30.118181 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 13 12:05:30.113651 ignition[681]: parsed url from cmdline: "" Nov 13 12:05:30.113658 ignition[681]: no config URL provided Nov 13 12:05:30.113668 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 12:05:30.113689 ignition[681]: no config at "/usr/lib/ignition/user.ign" Nov 13 12:05:30.113699 ignition[681]: failed to fetch config: resource requires networking Nov 13 12:05:30.113997 ignition[681]: Ignition finished successfully Nov 13 12:05:30.127326 systemd-networkd[770]: lo: Link UP Nov 13 12:05:30.127332 systemd-networkd[770]: lo: Gained carrier Nov 13 12:05:30.129695 systemd-networkd[770]: Enumeration completed Nov 13 12:05:30.130270 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 12:05:30.130276 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 13 12:05:30.130874 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 13 12:05:30.132201 systemd[1]: Reached target network.target - Network. 
Nov 13 12:05:30.132390 systemd-networkd[770]: eth0: Link UP Nov 13 12:05:30.132397 systemd-networkd[770]: eth0: Gained carrier Nov 13 12:05:30.132408 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 12:05:30.141728 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 13 12:05:30.155633 systemd-networkd[770]: eth0: DHCPv4 address 10.230.32.222/30, gateway 10.230.32.221 acquired from 10.230.32.221 Nov 13 12:05:30.167943 ignition[778]: Ignition 2.19.0 Nov 13 12:05:30.167998 ignition[778]: Stage: fetch Nov 13 12:05:30.168338 ignition[778]: no configs at "/usr/lib/ignition/base.d" Nov 13 12:05:30.168360 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 12:05:30.168668 ignition[778]: parsed url from cmdline: "" Nov 13 12:05:30.168688 ignition[778]: no config URL provided Nov 13 12:05:30.168712 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 12:05:30.168730 ignition[778]: no config at "/usr/lib/ignition/user.ign" Nov 13 12:05:30.169050 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 13 12:05:30.169109 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 13 12:05:30.169159 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Nov 13 12:05:30.186879 ignition[778]: GET result: OK Nov 13 12:05:30.187576 ignition[778]: parsing config with SHA512: 2c0f14421c0c2ac8d351c49d514abec033f16e720291e7b3d6778ac5e281dee309034f87b68517c6c9cf2e87c894baadd9d8b18f6a26a8b8c696a627ac66022c Nov 13 12:05:30.194868 unknown[778]: fetched base config from "system" Nov 13 12:05:30.194889 unknown[778]: fetched base config from "system" Nov 13 12:05:30.194899 unknown[778]: fetched user config from "openstack" Nov 13 12:05:30.197905 ignition[778]: fetch: fetch complete Nov 13 12:05:30.197921 ignition[778]: fetch: fetch passed Nov 13 12:05:30.198031 ignition[778]: Ignition finished successfully Nov 13 12:05:30.200921 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 13 12:05:30.214801 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 13 12:05:30.235072 ignition[785]: Ignition 2.19.0 Nov 13 12:05:30.235096 ignition[785]: Stage: kargs Nov 13 12:05:30.235372 ignition[785]: no configs at "/usr/lib/ignition/base.d" Nov 13 12:05:30.235394 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 12:05:30.237262 ignition[785]: kargs: kargs passed Nov 13 12:05:30.237367 ignition[785]: Ignition finished successfully Nov 13 12:05:30.240038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 13 12:05:30.245869 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 13 12:05:30.271841 ignition[791]: Ignition 2.19.0 Nov 13 12:05:30.272651 ignition[791]: Stage: disks Nov 13 12:05:30.272991 ignition[791]: no configs at "/usr/lib/ignition/base.d" Nov 13 12:05:30.273066 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 12:05:30.274787 ignition[791]: disks: disks passed Nov 13 12:05:30.274860 ignition[791]: Ignition finished successfully Nov 13 12:05:30.276411 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 13 12:05:30.278021 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 13 12:05:30.279258 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
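
Once eth0 has its lease, the fetch stage above retrieves the user data from the OpenStack metadata service and logs a SHA512 for the config it is about to parse. A standalone sketch of the same request (the URL is the one in the log; it only works from inside an instance, and the digest printed here is simply computed over the response body, which is what the logged hash appears to be):

# Fetch the metadata-service URL Ignition used above and hash the result.
import hashlib
import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"  # link-local metadata endpoint

with urllib.request.urlopen(URL, timeout=10) as response:
    user_data = response.read()

print(f"fetched {len(user_data)} bytes")
print("sha512:", hashlib.sha512(user_data).hexdigest())
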
Nov 13 12:05:30.280905 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 13 12:05:30.282570 systemd[1]: Reached target sysinit.target - System Initialization. Nov 13 12:05:30.283973 systemd[1]: Reached target basic.target - Basic System. Nov 13 12:05:30.295758 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 13 12:05:30.314687 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 13 12:05:30.318475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 13 12:05:30.328706 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 13 12:05:30.448526 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 13 12:05:30.449476 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 13 12:05:30.451060 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 13 12:05:30.459670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 12:05:30.463076 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 13 12:05:30.464321 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 13 12:05:30.466251 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Nov 13 12:05:30.471460 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 13 12:05:30.477640 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Nov 13 12:05:30.477677 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 12:05:30.471549 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 13 12:05:30.480533 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 12:05:30.480569 kernel: BTRFS info (device vda6): using free space tree Nov 13 12:05:30.485553 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 12:05:30.496629 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 13 12:05:30.498279 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 13 12:05:30.506744 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 13 12:05:30.595851 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Nov 13 12:05:30.609869 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Nov 13 12:05:30.615840 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Nov 13 12:05:30.623135 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Nov 13 12:05:30.737420 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 13 12:05:30.743988 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 13 12:05:30.746840 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 13 12:05:30.759538 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 12:05:30.794910 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
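
The fsck summary above ("clean, 14/1628000 files, 120691/1617920 blocks") is inode and block usage on the ext4 ROOT filesystem before it is mounted at /sysroot. A quick sketch of what those counters amount to, assuming the usual 4 KiB ext4 block size (the block size is an assumption; the counts are from the log):

# Turn the systemd-fsck counters for ROOT into percentages and sizes.
used_inodes, total_inodes = 14, 1_628_000
used_blocks, total_blocks = 120_691, 1_617_920
block_size = 4096  # assumed ext4 default, not stated in the log

print(f"inodes: {100 * used_inodes / total_inodes:.4f}% used")
print(f"blocks: {100 * used_blocks / total_blocks:.1f}% used")
print(f"size  : ~{total_blocks * block_size / 2**30:.1f} GiB, ~{used_blocks * block_size / 2**20:.0f} MiB allocated")
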
Nov 13 12:05:30.796810 ignition[926]: INFO : Ignition 2.19.0 Nov 13 12:05:30.796810 ignition[926]: INFO : Stage: mount Nov 13 12:05:30.799824 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 12:05:30.799824 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 12:05:30.799824 ignition[926]: INFO : mount: mount passed Nov 13 12:05:30.799824 ignition[926]: INFO : Ignition finished successfully Nov 13 12:05:30.800707 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 13 12:05:30.863081 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 13 12:05:31.714829 systemd-networkd[770]: eth0: Gained IPv6LL Nov 13 12:05:33.222592 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8837:24:19ff:fee6:20de/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8837:24:19ff:fee6:20de/64 assigned by NDisc. Nov 13 12:05:33.222608 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 13 12:05:37.681949 coreos-metadata[809]: Nov 13 12:05:37.681 WARN failed to locate config-drive, using the metadata service API instead Nov 13 12:05:37.707383 coreos-metadata[809]: Nov 13 12:05:37.707 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 13 12:05:37.724357 coreos-metadata[809]: Nov 13 12:05:37.724 INFO Fetch successful Nov 13 12:05:37.725249 coreos-metadata[809]: Nov 13 12:05:37.725 INFO wrote hostname srv-sx7g0.gb1.brightbox.com to /sysroot/etc/hostname Nov 13 12:05:37.726953 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Nov 13 12:05:37.727128 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Nov 13 12:05:37.735643 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 13 12:05:37.752773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 12:05:37.770629 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Nov 13 12:05:37.778912 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 12:05:37.778959 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 12:05:37.778981 kernel: BTRFS info (device vda6): using free space tree Nov 13 12:05:37.783526 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 12:05:37.788255 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
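
The DHCPv6 notice above is networkd seeing the server hand out, as a /128, the very address the interface already configured from router advertisements as part of a /64; it keeps the NDisc address and points at IPv6Token= for anyone who wants SLAAC to derive a different one. A tiny sketch confirming that the two strings in the message name the same host address:

# The "conflicting" addresses from the log differ only in prefix length.
import ipaddress

ndisc = ipaddress.IPv6Interface("2a02:1348:179:8837:24:19ff:fee6:20de/64")
dhcp6 = ipaddress.IPv6Interface("2a02:1348:179:8837:24:19ff:fee6:20de/128")

print(ndisc.ip == dhcp6.ip)   # True, so the /128 lease is ignored
print(ndisc.network)          # 2a02:1348:179:8837::/64, the prefix advertised via NDisc
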
Nov 13 12:05:37.821474 ignition[961]: INFO : Ignition 2.19.0 Nov 13 12:05:37.821474 ignition[961]: INFO : Stage: files Nov 13 12:05:37.823210 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 12:05:37.823210 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 12:05:37.825070 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Nov 13 12:05:37.827150 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 13 12:05:37.827150 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 13 12:05:37.831575 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 13 12:05:37.832929 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 13 12:05:37.833943 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 13 12:05:37.833164 unknown[961]: wrote ssh authorized keys file for user: core Nov 13 12:05:37.835984 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 13 12:05:37.835984 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 13 12:05:37.835984 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 13 12:05:37.835984 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 13 12:05:38.088100 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 13 12:05:38.438343 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 13 12:05:38.438343 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 13 12:05:38.447002 ignition[961]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 12:05:38.447002 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 13 12:05:38.999251 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 13 12:05:40.369895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 13 12:05:40.372578 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 13 12:05:40.372578 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 13 12:05:40.374980 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 13 12:05:40.374980 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 13 12:05:40.374980 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 13 12:05:40.374980 ignition[961]: INFO : files: files passed Nov 13 12:05:40.390498 ignition[961]: INFO : Ignition finished successfully Nov 13 12:05:40.377013 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 13 12:05:40.388825 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 13 12:05:40.398686 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 13 12:05:40.406019 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 13 12:05:40.406831 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
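
The files stage above lists every artifact it wrote under /sysroot, which becomes / once the system switches root later in the log. A post-boot sketch that re-checks those exact paths (all of them are copied verbatim from the messages above; nothing about their contents is assumed):

# Confirm the artifacts Ignition's files stage reported are really in place.
import os

expected = [
    "/etc/flatcar-cgroupv1",
    "/opt/helm-v3.13.2-linux-amd64.tar.gz",
    "/home/core/install.sh",
    "/etc/flatcar/update.conf",
    "/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf",
    "/etc/systemd/system/prepare-helm.service",
    "/etc/extensions/kubernetes.raw",
    "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
]
for path in expected:
    print("ok     " if os.path.exists(path) else "MISSING", path)
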
Nov 13 12:05:40.416355 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 13 12:05:40.416355 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 13 12:05:40.419059 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 13 12:05:40.422599 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 13 12:05:40.424208 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 13 12:05:40.436844 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 13 12:05:40.468925 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 13 12:05:40.469103 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 13 12:05:40.471115 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 13 12:05:40.472450 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 13 12:05:40.474077 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 13 12:05:40.480771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 13 12:05:40.501656 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 13 12:05:40.508741 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 13 12:05:40.538887 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 13 12:05:40.541019 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 12:05:40.541975 systemd[1]: Stopped target timers.target - Timer Units. Nov 13 12:05:40.543611 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 13 12:05:40.543824 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 13 12:05:40.545655 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 13 12:05:40.546747 systemd[1]: Stopped target basic.target - Basic System. Nov 13 12:05:40.548299 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 13 12:05:40.549732 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 13 12:05:40.551218 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 13 12:05:40.552836 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 13 12:05:40.554389 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 13 12:05:40.556012 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 13 12:05:40.557569 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 13 12:05:40.559138 systemd[1]: Stopped target swap.target - Swaps. Nov 13 12:05:40.560525 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 13 12:05:40.560708 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 13 12:05:40.562501 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 13 12:05:40.563637 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 12:05:40.565013 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 13 12:05:40.565200 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 13 12:05:40.566674 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 13 12:05:40.566855 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 13 12:05:40.568792 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 13 12:05:40.568960 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 13 12:05:40.570730 systemd[1]: ignition-files.service: Deactivated successfully. Nov 13 12:05:40.570892 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 13 12:05:40.578933 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 13 12:05:40.579737 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 13 12:05:40.579976 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 12:05:40.594629 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 13 12:05:40.596335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 13 12:05:40.598638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 13 12:05:40.600776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 13 12:05:40.600935 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 13 12:05:40.608779 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 13 12:05:40.608945 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 13 12:05:40.628729 ignition[1013]: INFO : Ignition 2.19.0 Nov 13 12:05:40.628729 ignition[1013]: INFO : Stage: umount Nov 13 12:05:40.628729 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 12:05:40.628729 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 12:05:40.628729 ignition[1013]: INFO : umount: umount passed Nov 13 12:05:40.628729 ignition[1013]: INFO : Ignition finished successfully Nov 13 12:05:40.630101 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 13 12:05:40.630294 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 13 12:05:40.631866 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 13 12:05:40.632003 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 13 12:05:40.633285 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 13 12:05:40.633365 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 13 12:05:40.634889 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 13 12:05:40.634963 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 13 12:05:40.636478 systemd[1]: Stopped target network.target - Network. Nov 13 12:05:40.637818 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 13 12:05:40.637892 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 13 12:05:40.639336 systemd[1]: Stopped target paths.target - Path Units. Nov 13 12:05:40.640611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 13 12:05:40.644598 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 12:05:40.645496 systemd[1]: Stopped target slices.target - Slice Units. Nov 13 12:05:40.646879 systemd[1]: Stopped target sockets.target - Socket Units. Nov 13 12:05:40.648345 systemd[1]: iscsid.socket: Deactivated successfully. Nov 13 12:05:40.648425 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 13 12:05:40.649599 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 13 12:05:40.649661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 13 12:05:40.651151 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 13 12:05:40.651254 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 13 12:05:40.652789 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 13 12:05:40.652853 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 13 12:05:40.654389 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 13 12:05:40.656522 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 13 12:05:40.659706 systemd-networkd[770]: eth0: DHCPv6 lease lost Nov 13 12:05:40.664020 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 13 12:05:40.664721 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 13 12:05:40.667589 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 13 12:05:40.667779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 13 12:05:40.671961 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 13 12:05:40.672230 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 13 12:05:40.679646 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 13 12:05:40.681493 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 13 12:05:40.682658 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 13 12:05:40.683549 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 13 12:05:40.683626 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 13 12:05:40.687035 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 13 12:05:40.687116 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 13 12:05:40.688993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 13 12:05:40.689061 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 12:05:40.692667 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 13 12:05:40.696137 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 13 12:05:40.704111 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 13 12:05:40.704409 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 12:05:40.707906 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 13 12:05:40.708012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 13 12:05:40.710231 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 13 12:05:40.710293 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 12:05:40.711013 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 13 12:05:40.711082 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 13 12:05:40.713031 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 13 12:05:40.713099 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 13 12:05:40.714660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 13 12:05:40.714757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 13 12:05:40.722753 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 13 12:05:40.724226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 13 12:05:40.724304 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 12:05:40.728339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 13 12:05:40.728409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 12:05:40.731583 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 13 12:05:40.732600 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 13 12:05:40.739005 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 13 12:05:40.739172 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 13 12:05:40.753077 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 13 12:05:40.753265 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 13 12:05:40.755394 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 13 12:05:40.756181 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 13 12:05:40.756268 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 13 12:05:40.769793 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 13 12:05:40.779930 systemd[1]: Switching root. Nov 13 12:05:40.814320 systemd-journald[201]: Journal stopped Nov 13 12:05:42.357879 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Nov 13 12:05:42.358052 kernel: SELinux: policy capability network_peer_controls=1 Nov 13 12:05:42.358088 kernel: SELinux: policy capability open_perms=1 Nov 13 12:05:42.358113 kernel: SELinux: policy capability extended_socket_class=1 Nov 13 12:05:42.358154 kernel: SELinux: policy capability always_check_network=0 Nov 13 12:05:42.358182 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 13 12:05:42.358208 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 13 12:05:42.358241 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 13 12:05:42.358272 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 13 12:05:42.358293 kernel: audit: type=1403 audit(1731499541.112:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 13 12:05:42.358333 systemd[1]: Successfully loaded SELinux policy in 55.681ms. Nov 13 12:05:42.358396 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.375ms. Nov 13 12:05:42.358426 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 13 12:05:42.358456 systemd[1]: Detected virtualization kvm. Nov 13 12:05:42.358478 systemd[1]: Detected architecture x86-64. Nov 13 12:05:42.362028 systemd[1]: Detected first boot. Nov 13 12:05:42.362065 systemd[1]: Hostname set to . Nov 13 12:05:42.362099 systemd[1]: Initializing machine ID from VM UUID. Nov 13 12:05:42.362141 zram_generator::config[1074]: No configuration found. Nov 13 12:05:42.362165 systemd[1]: Populated /etc with preset unit settings. Nov 13 12:05:42.362192 systemd[1]: Queued start job for default target multi-user.target. 
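
"Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id from the hypervisor-provided VM UUID instead of generating a random one on this first boot. A sketch of where to compare the two values after boot (the sysfs path is the standard DMI location, not something the log names, and reading it needs root):

# Compare the DMI product UUID with the machine ID systemd derived on first boot.
def read(path):
    with open(path) as f:
        return f.read().strip()

print("DMI product UUID:", read("/sys/class/dmi/id/product_uuid"))  # root only
print("machine-id      :", read("/etc/machine-id"))
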
Nov 13 12:05:42.362230 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 13 12:05:42.362254 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 13 12:05:42.362281 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 13 12:05:42.362313 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 13 12:05:42.362342 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 13 12:05:42.362376 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 13 12:05:42.362413 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 13 12:05:42.362442 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 13 12:05:42.362463 systemd[1]: Created slice user.slice - User and Session Slice. Nov 13 12:05:42.362490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 12:05:42.362549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 12:05:42.362583 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 13 12:05:42.362613 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 13 12:05:42.362635 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 13 12:05:42.362686 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 13 12:05:42.362716 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 13 12:05:42.362738 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 12:05:42.362759 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 13 12:05:42.362779 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 12:05:42.362800 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 13 12:05:42.362837 systemd[1]: Reached target slices.target - Slice Units. Nov 13 12:05:42.362865 systemd[1]: Reached target swap.target - Swaps. Nov 13 12:05:42.362887 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 13 12:05:42.362916 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 13 12:05:42.362944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 13 12:05:42.362994 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 13 12:05:42.363028 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 13 12:05:42.363051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 13 12:05:42.363072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 12:05:42.363099 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 13 12:05:42.363121 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 13 12:05:42.363142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 13 12:05:42.363169 systemd[1]: Mounting media.mount - External Media Directory... Nov 13 12:05:42.363191 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 13 12:05:42.363212 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 13 12:05:42.363245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 13 12:05:42.363267 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 13 12:05:42.363289 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 13 12:05:42.363311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 12:05:42.363331 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 13 12:05:42.363359 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 13 12:05:42.363396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 13 12:05:42.363418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 13 12:05:42.363445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 13 12:05:42.363481 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 13 12:05:42.364293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 13 12:05:42.364329 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 13 12:05:42.364351 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 13 12:05:42.364386 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 13 12:05:42.364408 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 13 12:05:42.364435 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 13 12:05:42.364457 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 13 12:05:42.364492 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 13 12:05:42.366621 kernel: loop: module loaded Nov 13 12:05:42.366659 kernel: fuse: init (API version 7.39) Nov 13 12:05:42.366693 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 13 12:05:42.366722 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 12:05:42.366744 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 13 12:05:42.366771 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 13 12:05:42.366846 systemd-journald[1183]: Collecting audit messages is disabled. Nov 13 12:05:42.366926 systemd[1]: Mounted media.mount - External Media Directory. Nov 13 12:05:42.366950 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 13 12:05:42.366971 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 13 12:05:42.366992 kernel: ACPI: bus type drm_connector registered Nov 13 12:05:42.367012 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 13 12:05:42.367039 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 13 12:05:42.367061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 12:05:42.367086 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Nov 13 12:05:42.367109 systemd-journald[1183]: Journal started Nov 13 12:05:42.367159 systemd-journald[1183]: Runtime Journal (/run/log/journal/8b5a1049b04d4830b3698c72c747234f) is 4.7M, max 38.0M, 33.2M free. Nov 13 12:05:42.370611 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 13 12:05:42.374602 systemd[1]: Started systemd-journald.service - Journal Service. Nov 13 12:05:42.377496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 13 12:05:42.377852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 13 12:05:42.379557 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 13 12:05:42.379821 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 13 12:05:42.381140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 13 12:05:42.381510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 13 12:05:42.382995 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 13 12:05:42.383404 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 13 12:05:42.384915 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 13 12:05:42.385366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 13 12:05:42.386857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 13 12:05:42.388225 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 13 12:05:42.389875 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 13 12:05:42.406388 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 13 12:05:42.413684 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 13 12:05:42.420598 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 13 12:05:42.422714 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 13 12:05:42.432702 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 13 12:05:42.440705 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 13 12:05:42.441971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 13 12:05:42.454651 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 13 12:05:42.455789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 13 12:05:42.466744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 12:05:42.474096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 13 12:05:42.482182 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 13 12:05:42.484075 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 13 12:05:42.498729 systemd-journald[1183]: Time spent on flushing to /var/log/journal/8b5a1049b04d4830b3698c72c747234f is 68.355ms for 1127 entries. Nov 13 12:05:42.498729 systemd-journald[1183]: System Journal (/var/log/journal/8b5a1049b04d4830b3698c72c747234f) is 8.0M, max 584.8M, 576.8M free. 
Nov 13 12:05:42.599653 systemd-journald[1183]: Received client request to flush runtime journal. Nov 13 12:05:42.507466 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 13 12:05:42.508939 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 13 12:05:42.538069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 13 12:05:42.583935 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Nov 13 12:05:42.583956 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Nov 13 12:05:42.595728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 13 12:05:42.601077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 13 12:05:42.604005 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 13 12:05:42.617769 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 13 12:05:42.623729 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 13 12:05:42.649459 udevadm[1246]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 13 12:05:42.681205 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 13 12:05:42.692833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 13 12:05:42.717128 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Nov 13 12:05:42.717154 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Nov 13 12:05:42.727068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 12:05:43.238733 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 13 12:05:43.251799 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 13 12:05:43.287374 systemd-udevd[1256]: Using default interface naming scheme 'v255'. Nov 13 12:05:43.318306 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 12:05:43.332739 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 13 12:05:43.361762 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 13 12:05:43.430457 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 13 12:05:43.446051 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 13 12:05:43.462640 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1269) Nov 13 12:05:43.469541 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1269) Nov 13 12:05:43.505535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1267) Nov 13 12:05:43.569303 systemd-networkd[1261]: lo: Link UP Nov 13 12:05:43.569316 systemd-networkd[1261]: lo: Gained carrier Nov 13 12:05:43.572255 systemd-networkd[1261]: Enumeration completed Nov 13 12:05:43.572617 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 13 12:05:43.573166 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 12:05:43.573272 systemd-networkd[1261]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 13 12:05:43.574748 systemd-networkd[1261]: eth0: Link UP Nov 13 12:05:43.574851 systemd-networkd[1261]: eth0: Gained carrier Nov 13 12:05:43.574968 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 12:05:43.579726 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 13 12:05:43.601593 systemd-networkd[1261]: eth0: DHCPv4 address 10.230.32.222/30, gateway 10.230.32.221 acquired from 10.230.32.221 Nov 13 12:05:43.638544 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 13 12:05:43.646523 kernel: ACPI: button: Power Button [PWRF] Nov 13 12:05:43.654550 kernel: mousedev: PS/2 mouse device common for all mice Nov 13 12:05:43.681141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 13 12:05:43.722532 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 13 12:05:43.730010 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 13 12:05:43.739171 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 13 12:05:43.741581 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 13 12:05:43.780842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 12:05:43.973995 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 13 12:05:44.005354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 12:05:44.015797 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 13 12:05:44.044572 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 13 12:05:44.079417 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 13 12:05:44.081119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 13 12:05:44.088767 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 13 12:05:44.097631 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 13 12:05:44.136216 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 13 12:05:44.138078 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 13 12:05:44.139029 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 13 12:05:44.139264 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 13 12:05:44.140259 systemd[1]: Reached target machines.target - Containers. Nov 13 12:05:44.142916 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 13 12:05:44.150737 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 13 12:05:44.153746 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 13 12:05:44.156983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 13 12:05:44.163823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
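
The DHCPv4 lease that shows up again above (10.230.32.222/30, gateway 10.230.32.221) is a point-to-point style /30: the subnet has exactly two usable addresses, one for the gateway and one for the instance. A one-liner sketch with the standard library:

# A /30 leaves exactly two host addresses: the gateway and this instance.
import ipaddress

iface = ipaddress.IPv4Interface("10.230.32.222/30")
print(iface.network)                # 10.230.32.220/30
print(list(iface.network.hosts()))  # [IPv4Address('10.230.32.221'), IPv4Address('10.230.32.222')]
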
Nov 13 12:05:44.170737 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 13 12:05:44.175670 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 13 12:05:44.181885 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 13 12:05:44.209834 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 13 12:05:44.219542 kernel: loop0: detected capacity change from 0 to 211296 Nov 13 12:05:44.227308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 13 12:05:44.231436 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 13 12:05:44.254818 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 13 12:05:44.289554 kernel: loop1: detected capacity change from 0 to 142488 Nov 13 12:05:44.347559 kernel: loop2: detected capacity change from 0 to 8 Nov 13 12:05:44.377557 kernel: loop3: detected capacity change from 0 to 140768 Nov 13 12:05:44.432583 kernel: loop4: detected capacity change from 0 to 211296 Nov 13 12:05:44.448574 kernel: loop5: detected capacity change from 0 to 142488 Nov 13 12:05:44.470822 kernel: loop6: detected capacity change from 0 to 8 Nov 13 12:05:44.474525 kernel: loop7: detected capacity change from 0 to 140768 Nov 13 12:05:44.492701 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Nov 13 12:05:44.493738 (sd-merge)[1321]: Merged extensions into '/usr'. Nov 13 12:05:44.500067 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Nov 13 12:05:44.500109 systemd[1]: Reloading... Nov 13 12:05:44.574590 zram_generator::config[1349]: No configuration found. Nov 13 12:05:44.789765 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 13 12:05:44.844989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 12:05:44.934485 systemd[1]: Reloading finished in 433 ms. Nov 13 12:05:44.958733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 13 12:05:44.960152 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 13 12:05:44.971769 systemd[1]: Starting ensure-sysext.service... Nov 13 12:05:44.974726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 13 12:05:44.985250 systemd[1]: Reloading requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)... Nov 13 12:05:44.985302 systemd[1]: Reloading... Nov 13 12:05:45.027600 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 13 12:05:45.029098 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 13 12:05:45.031286 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 13 12:05:45.031937 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Nov 13 12:05:45.032060 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. Nov 13 12:05:45.039901 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 13 12:05:45.040131 systemd-tmpfiles[1413]: Skipping /boot Nov 13 12:05:45.058536 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. Nov 13 12:05:45.058749 systemd-tmpfiles[1413]: Skipping /boot Nov 13 12:05:45.118597 zram_generator::config[1445]: No configuration found. Nov 13 12:05:45.297389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 12:05:45.347407 systemd-networkd[1261]: eth0: Gained IPv6LL Nov 13 12:05:45.387220 systemd[1]: Reloading finished in 401 ms. Nov 13 12:05:45.410637 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 13 12:05:45.426289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 12:05:45.440782 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 13 12:05:45.445723 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 13 12:05:45.452680 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 13 12:05:45.470801 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 13 12:05:45.480063 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 13 12:05:45.492675 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 12:05:45.494121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 12:05:45.499807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 13 12:05:45.513370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 13 12:05:45.528006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 13 12:05:45.530810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 13 12:05:45.531252 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 12:05:45.541061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 12:05:45.541848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 12:05:45.542095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 13 12:05:45.542227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 12:05:45.550839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 12:05:45.551669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 13 12:05:45.574912 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 13 12:05:45.575915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 13 12:05:45.576091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 12:05:45.581090 systemd[1]: Finished ensure-sysext.service. Nov 13 12:05:45.585990 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 13 12:05:45.588215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 13 12:05:45.588748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 13 12:05:45.590984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 13 12:05:45.591426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 13 12:05:45.593922 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 13 12:05:45.594175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 13 12:05:45.596618 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 13 12:05:45.596903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 13 12:05:45.601795 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 13 12:05:45.619380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 13 12:05:45.620221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 13 12:05:45.622649 augenrules[1546]: No rules Nov 13 12:05:45.635728 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 13 12:05:45.653746 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 13 12:05:45.656304 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 13 12:05:45.659424 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 13 12:05:45.671239 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 13 12:05:45.673034 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 13 12:05:45.684224 systemd-resolved[1517]: Positive Trust Anchors: Nov 13 12:05:45.684244 systemd-resolved[1517]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 13 12:05:45.684289 systemd-resolved[1517]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 13 12:05:45.691869 systemd-resolved[1517]: Using system hostname 'srv-sx7g0.gb1.brightbox.com'. Nov 13 12:05:45.695482 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 13 12:05:45.696619 systemd[1]: Reached target network.target - Network. Nov 13 12:05:45.697407 systemd[1]: Reached target network-online.target - Network is Online. 
Nov 13 12:05:45.698473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 13 12:05:45.748170 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 13 12:05:45.749593 systemd[1]: Reached target sysinit.target - System Initialization. Nov 13 12:05:45.750477 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 13 12:05:45.752322 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 13 12:05:45.753178 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 13 12:05:45.754114 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 13 12:05:45.754185 systemd[1]: Reached target paths.target - Path Units. Nov 13 12:05:45.754867 systemd[1]: Reached target time-set.target - System Time Set. Nov 13 12:05:45.755840 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 13 12:05:45.756781 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 13 12:05:45.757569 systemd[1]: Reached target timers.target - Timer Units. Nov 13 12:05:45.759210 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 13 12:05:45.762467 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 13 12:05:45.765855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 13 12:05:45.767714 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 13 12:05:45.768468 systemd[1]: Reached target sockets.target - Socket Units. Nov 13 12:05:45.769231 systemd[1]: Reached target basic.target - Basic System. Nov 13 12:05:45.770187 systemd[1]: System is tainted: cgroupsv1 Nov 13 12:05:45.770249 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 13 12:05:45.770296 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 13 12:05:45.773423 systemd[1]: Starting containerd.service - containerd container runtime... Nov 13 12:05:45.776706 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 13 12:05:45.781962 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 13 12:05:45.786878 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 13 12:05:45.793809 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 13 12:05:45.794730 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 13 12:05:45.812487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:05:45.824787 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 13 12:05:45.839008 jq[1567]: false Nov 13 12:05:45.842005 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 13 12:05:45.865742 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Nov 13 12:05:45.867486 dbus-daemon[1565]: [system] SELinux support is enabled Nov 13 12:05:45.871569 extend-filesystems[1568]: Found loop4 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found loop5 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found loop6 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found loop7 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda1 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda2 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda3 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found usr Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda4 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda6 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda7 Nov 13 12:05:45.871569 extend-filesystems[1568]: Found vda9 Nov 13 12:05:45.871569 extend-filesystems[1568]: Checking size of /dev/vda9 Nov 13 12:05:45.885887 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 13 12:05:45.902683 systemd-networkd[1261]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8837:24:19ff:fee6:20de/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8837:24:19ff:fee6:20de/64 assigned by NDisc. Nov 13 12:05:45.906749 dbus-daemon[1565]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1261 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 13 12:05:45.902690 systemd-networkd[1261]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 13 12:05:45.915708 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 13 12:05:45.922709 extend-filesystems[1568]: Resized partition /dev/vda9 Nov 13 12:05:45.927737 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 13 12:05:45.935400 extend-filesystems[1596]: resize2fs 1.47.1 (20-May-2024) Nov 13 12:05:45.949423 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Nov 13 12:05:45.935643 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 13 12:05:45.957771 systemd[1]: Starting update-engine.service - Update Engine... Nov 13 12:05:45.964454 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 13 12:05:45.979912 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 13 12:05:45.992867 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 13 12:05:45.993240 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 13 12:05:46.004186 systemd[1]: motdgen.service: Deactivated successfully. Nov 13 12:05:46.004610 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 13 12:05:46.016125 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 13 12:05:46.030695 jq[1604]: true Nov 13 12:05:46.033894 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 13 12:05:46.034259 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 13 12:05:46.060600 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 13 12:05:46.077694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1257) Nov 13 12:05:46.077125 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 13 12:05:46.077163 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 13 12:05:46.906163 systemd-timesyncd[1553]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Nov 13 12:05:46.906242 systemd-timesyncd[1553]: Initial clock synchronization to Wed 2024-11-13 12:05:46.905953 UTC. Nov 13 12:05:46.906312 systemd-resolved[1517]: Clock change detected. Flushing caches. Nov 13 12:05:46.916637 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 13 12:05:46.924187 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 13 12:05:46.925037 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 13 12:05:46.925101 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 13 12:05:46.946953 tar[1609]: linux-amd64/helm Nov 13 12:05:46.947502 update_engine[1601]: I20241113 12:05:46.941617 1601 main.cc:92] Flatcar Update Engine starting Nov 13 12:05:46.953439 jq[1610]: true Nov 13 12:05:46.964445 update_engine[1601]: I20241113 12:05:46.955919 1601 update_check_scheduler.cc:74] Next update check in 7m13s Nov 13 12:05:46.961553 systemd[1]: Started update-engine.service - Update Engine. Nov 13 12:05:46.966479 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 13 12:05:46.975482 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 13 12:05:47.013087 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 13 12:05:47.044257 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 13 12:05:47.044257 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 13 12:05:47.044257 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 13 12:05:47.043520 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 13 12:05:47.059647 bash[1642]: Updated "/home/core/.ssh/authorized_keys" Nov 13 12:05:47.071104 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Nov 13 12:05:47.043914 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 13 12:05:47.068583 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 13 12:05:47.093636 systemd[1]: Starting sshkeys.service... Nov 13 12:05:47.148839 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 13 12:05:47.160832 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 13 12:05:47.169966 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (Power Button) Nov 13 12:05:47.170045 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 13 12:05:47.171762 systemd-logind[1597]: New seat seat0. Nov 13 12:05:47.177674 systemd[1]: Started systemd-logind.service - User Login Management. Nov 13 12:05:47.290132 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 13 12:05:47.457904 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 13 12:05:47.458168 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 13 12:05:47.462289 dbus-daemon[1565]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1624 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 13 12:05:47.476274 systemd[1]: Starting polkit.service - Authorization Manager... Nov 13 12:05:47.527765 polkitd[1673]: Started polkitd version 121 Nov 13 12:05:47.567542 polkitd[1673]: Loading rules from directory /etc/polkit-1/rules.d Nov 13 12:05:47.567666 polkitd[1673]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 13 12:05:47.575724 polkitd[1673]: Finished loading, compiling and executing 2 rules Nov 13 12:05:47.581278 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 13 12:05:47.581623 systemd[1]: Started polkit.service - Authorization Manager. Nov 13 12:05:47.585592 polkitd[1673]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 13 12:05:47.631216 systemd-hostnamed[1624]: Hostname set to (static) Nov 13 12:05:47.694769 containerd[1621]: time="2024-11-13T12:05:47.692648877Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 13 12:05:47.780225 containerd[1621]: time="2024-11-13T12:05:47.779940806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 13 12:05:47.786982 containerd[1621]: time="2024-11-13T12:05:47.786921849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 13 12:05:47.786982 containerd[1621]: time="2024-11-13T12:05:47.786973070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 13 12:05:47.789101 containerd[1621]: time="2024-11-13T12:05:47.789059551Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789358223Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789399153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789540078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789568343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789906567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789932305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789953961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 12:05:47.790126 containerd[1621]: time="2024-11-13T12:05:47.789970114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 13 12:05:47.790481 containerd[1621]: time="2024-11-13T12:05:47.790240921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 13 12:05:47.791254 containerd[1621]: time="2024-11-13T12:05:47.790822141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 13 12:05:47.795162 containerd[1621]: time="2024-11-13T12:05:47.794740094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 12:05:47.795162 containerd[1621]: time="2024-11-13T12:05:47.794786445Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 13 12:05:47.795162 containerd[1621]: time="2024-11-13T12:05:47.794940670Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 13 12:05:47.795162 containerd[1621]: time="2024-11-13T12:05:47.795084472Z" level=info msg="metadata content store policy set" policy=shared Nov 13 12:05:47.813612 containerd[1621]: time="2024-11-13T12:05:47.813114503Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 13 12:05:47.813612 containerd[1621]: time="2024-11-13T12:05:47.813223289Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 13 12:05:47.813612 containerd[1621]: time="2024-11-13T12:05:47.813310354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 13 12:05:47.813612 containerd[1621]: time="2024-11-13T12:05:47.813344310Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 13 12:05:47.813612 containerd[1621]: time="2024-11-13T12:05:47.813387202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 13 12:05:47.813951 containerd[1621]: time="2024-11-13T12:05:47.813657332Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Nov 13 12:05:47.814584 containerd[1621]: time="2024-11-13T12:05:47.814263965Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 13 12:05:47.814584 containerd[1621]: time="2024-11-13T12:05:47.814468741Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 13 12:05:47.814584 containerd[1621]: time="2024-11-13T12:05:47.814495993Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 13 12:05:47.814584 containerd[1621]: time="2024-11-13T12:05:47.814516888Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 13 12:05:47.814584 containerd[1621]: time="2024-11-13T12:05:47.814548112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814584 containerd[1621]: time="2024-11-13T12:05:47.814580808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814609873Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814633147Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814654245Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814684636Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814703544Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814723152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814763251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814786210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.814813 containerd[1621]: time="2024-11-13T12:05:47.814805333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814826526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814845062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814871469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814891387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814923629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814946198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814967382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.815161 containerd[1621]: time="2024-11-13T12:05:47.814985358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.818156 containerd[1621]: time="2024-11-13T12:05:47.818050174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.818156 containerd[1621]: time="2024-11-13T12:05:47.818087632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.818156 containerd[1621]: time="2024-11-13T12:05:47.818118016Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 13 12:05:47.818379 containerd[1621]: time="2024-11-13T12:05:47.818188639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.818379 containerd[1621]: time="2024-11-13T12:05:47.818214145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.818379 containerd[1621]: time="2024-11-13T12:05:47.818232695Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 13 12:05:47.818379 containerd[1621]: time="2024-11-13T12:05:47.818311791Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 13 12:05:47.818379 containerd[1621]: time="2024-11-13T12:05:47.818346746Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 13 12:05:47.818379 containerd[1621]: time="2024-11-13T12:05:47.818367329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 13 12:05:47.818583 containerd[1621]: time="2024-11-13T12:05:47.818386084Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 13 12:05:47.818583 containerd[1621]: time="2024-11-13T12:05:47.818402630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 13 12:05:47.818583 containerd[1621]: time="2024-11-13T12:05:47.818433447Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 13 12:05:47.818583 containerd[1621]: time="2024-11-13T12:05:47.818459543Z" level=info msg="NRI interface is disabled by configuration." Nov 13 12:05:47.818583 containerd[1621]: time="2024-11-13T12:05:47.818477484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 13 12:05:47.819946 containerd[1621]: time="2024-11-13T12:05:47.818911345Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 13 12:05:47.821422 containerd[1621]: time="2024-11-13T12:05:47.821172244Z" level=info msg="Connect containerd service" Nov 13 12:05:47.821422 containerd[1621]: time="2024-11-13T12:05:47.821296270Z" level=info msg="using legacy CRI server" Nov 13 12:05:47.821422 containerd[1621]: time="2024-11-13T12:05:47.821315095Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 13 12:05:47.821564 containerd[1621]: time="2024-11-13T12:05:47.821515963Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 13 12:05:47.824803 containerd[1621]: time="2024-11-13T12:05:47.824353358Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 
12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.824999052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 13 12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.825116115Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 13 12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.825216843Z" level=info msg="Start subscribing containerd event" Nov 13 12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.825278506Z" level=info msg="Start recovering state" Nov 13 12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.825410697Z" level=info msg="Start event monitor" Nov 13 12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.825447550Z" level=info msg="Start snapshots syncer" Nov 13 12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.825469523Z" level=info msg="Start cni network conf syncer for default" Nov 13 12:05:47.826059 containerd[1621]: time="2024-11-13T12:05:47.825483026Z" level=info msg="Start streaming server" Nov 13 12:05:47.825749 systemd[1]: Started containerd.service - containerd container runtime. Nov 13 12:05:47.829697 containerd[1621]: time="2024-11-13T12:05:47.829533174Z" level=info msg="containerd successfully booted in 0.140941s" Nov 13 12:05:48.051299 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 13 12:05:48.096984 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 13 12:05:48.111742 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 13 12:05:48.135613 systemd[1]: issuegen.service: Deactivated successfully. Nov 13 12:05:48.136118 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 13 12:05:48.150846 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 13 12:05:48.172284 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 13 12:05:48.185924 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 13 12:05:48.196502 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 13 12:05:48.198804 systemd[1]: Reached target getty.target - Login Prompts. Nov 13 12:05:48.355979 tar[1609]: linux-amd64/LICENSE Nov 13 12:05:48.363041 tar[1609]: linux-amd64/README.md Nov 13 12:05:48.380100 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 13 12:05:48.413230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 12:05:48.420886 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 12:05:49.154682 kubelet[1721]: E1113 12:05:49.154484 1721 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 12:05:49.157714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 12:05:49.158862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 12:05:49.441486 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 13 12:05:49.448409 systemd[1]: Started sshd@0-10.230.32.222:22-147.75.109.163:40518.service - OpenSSH per-connection server daemon (147.75.109.163:40518). 
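The containerd entries above report the daemon serving on /run/containerd/containerd.sock. As a minimal illustration (not part of the boot log itself), the following Go sketch, assuming the github.com/containerd/containerd client library is available, connects to that socket and prints the daemon version; the socket path comes from the log, everything else is illustrative.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	containerd "github.com/containerd/containerd"
    )

    func main() {
    	// Socket path as reported by the containerd log lines above.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatalf("connect to containerd: %v", err)
    	}
    	defer client.Close()

    	// Query the daemon version over the same gRPC socket it serves on.
    	ver, err := client.Version(context.Background())
    	if err != nil {
    		log.Fatalf("version query: %v", err)
    	}
    	fmt.Printf("containerd %s (revision %s)\n", ver.Version, ver.Revision)
    }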
Nov 13 12:05:50.353402 sshd[1730]: Accepted publickey for core from 147.75.109.163 port 40518 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:05:50.356715 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:05:50.371237 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 13 12:05:50.378398 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 13 12:05:50.384399 systemd-logind[1597]: New session 1 of user core. Nov 13 12:05:50.409231 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 13 12:05:50.422561 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 13 12:05:50.428599 (systemd)[1738]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 13 12:05:50.563200 systemd[1738]: Queued start job for default target default.target. Nov 13 12:05:50.564407 systemd[1738]: Created slice app.slice - User Application Slice. Nov 13 12:05:50.564454 systemd[1738]: Reached target paths.target - Paths. Nov 13 12:05:50.564477 systemd[1738]: Reached target timers.target - Timers. Nov 13 12:05:50.575153 systemd[1738]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 13 12:05:50.584714 systemd[1738]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 13 12:05:50.584989 systemd[1738]: Reached target sockets.target - Sockets. Nov 13 12:05:50.585163 systemd[1738]: Reached target basic.target - Basic System. Nov 13 12:05:50.585549 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 13 12:05:50.590845 systemd[1738]: Reached target default.target - Main User Target. Nov 13 12:05:50.591047 systemd[1738]: Startup finished in 153ms. Nov 13 12:05:50.606051 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 13 12:05:51.298551 systemd[1]: Started sshd@1-10.230.32.222:22-147.75.109.163:40534.service - OpenSSH per-connection server daemon (147.75.109.163:40534). Nov 13 12:05:52.188646 sshd[1750]: Accepted publickey for core from 147.75.109.163 port 40534 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:05:52.191692 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:05:52.203224 systemd-logind[1597]: New session 2 of user core. Nov 13 12:05:52.214744 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 13 12:05:52.803849 sshd[1750]: pam_unix(sshd:session): session closed for user core Nov 13 12:05:52.807980 systemd-logind[1597]: Session 2 logged out. Waiting for processes to exit. Nov 13 12:05:52.809675 systemd[1]: sshd@1-10.230.32.222:22-147.75.109.163:40534.service: Deactivated successfully. Nov 13 12:05:52.813647 systemd[1]: session-2.scope: Deactivated successfully. Nov 13 12:05:52.815291 systemd-logind[1597]: Removed session 2. Nov 13 12:05:52.954462 systemd[1]: Started sshd@2-10.230.32.222:22-147.75.109.163:40538.service - OpenSSH per-connection server daemon (147.75.109.163:40538). Nov 13 12:05:53.239828 login[1706]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 13 12:05:53.247535 login[1705]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 13 12:05:53.248713 systemd-logind[1597]: New session 3 of user core. Nov 13 12:05:53.259900 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 13 12:05:53.264480 systemd-logind[1597]: New session 4 of user core. 
Nov 13 12:05:53.274605 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 13 12:05:53.774547 coreos-metadata[1564]: Nov 13 12:05:53.774 WARN failed to locate config-drive, using the metadata service API instead Nov 13 12:05:53.801444 coreos-metadata[1564]: Nov 13 12:05:53.801 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Nov 13 12:05:53.808284 coreos-metadata[1564]: Nov 13 12:05:53.808 INFO Fetch failed with 404: resource not found Nov 13 12:05:53.808284 coreos-metadata[1564]: Nov 13 12:05:53.808 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 13 12:05:53.809085 coreos-metadata[1564]: Nov 13 12:05:53.809 INFO Fetch successful Nov 13 12:05:53.809264 coreos-metadata[1564]: Nov 13 12:05:53.809 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Nov 13 12:05:53.825799 coreos-metadata[1564]: Nov 13 12:05:53.825 INFO Fetch successful Nov 13 12:05:53.826106 coreos-metadata[1564]: Nov 13 12:05:53.826 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Nov 13 12:05:53.841893 coreos-metadata[1564]: Nov 13 12:05:53.841 INFO Fetch successful Nov 13 12:05:53.842189 coreos-metadata[1564]: Nov 13 12:05:53.842 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Nov 13 12:05:53.856957 sshd[1758]: Accepted publickey for core from 147.75.109.163 port 40538 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:05:53.859139 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:05:53.860264 coreos-metadata[1564]: Nov 13 12:05:53.860 INFO Fetch successful Nov 13 12:05:53.860264 coreos-metadata[1564]: Nov 13 12:05:53.860 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Nov 13 12:05:53.866312 systemd-logind[1597]: New session 5 of user core. Nov 13 12:05:53.876593 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 13 12:05:53.881159 coreos-metadata[1564]: Nov 13 12:05:53.880 INFO Fetch successful Nov 13 12:05:53.921662 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 13 12:05:53.923499 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 13 12:05:54.409406 coreos-metadata[1649]: Nov 13 12:05:54.409 WARN failed to locate config-drive, using the metadata service API instead Nov 13 12:05:54.432201 coreos-metadata[1649]: Nov 13 12:05:54.432 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 13 12:05:54.468365 coreos-metadata[1649]: Nov 13 12:05:54.468 INFO Fetch successful Nov 13 12:05:54.468533 coreos-metadata[1649]: Nov 13 12:05:54.468 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 13 12:05:54.477327 sshd[1758]: pam_unix(sshd:session): session closed for user core Nov 13 12:05:54.481496 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit. Nov 13 12:05:54.482577 systemd[1]: sshd@2-10.230.32.222:22-147.75.109.163:40538.service: Deactivated successfully. Nov 13 12:05:54.486784 systemd[1]: session-5.scope: Deactivated successfully. Nov 13 12:05:54.488370 systemd-logind[1597]: Removed session 5. 
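The coreos-metadata entries above show the agent falling back to the metadata service because no config-drive is present, fetching hostname, instance-id and addresses from http://169.254.169.254. A minimal Go sketch of the same kind of request follows; the URL is taken from the log, while the timeout and output format are illustrative assumptions.

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	// Same endpoint the Flatcar metadata agent queries in the log above.
    	c := &http.Client{Timeout: 5 * time.Second}
    	resp, err := c.Get("http://169.254.169.254/latest/meta-data/hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("hostname: %s (HTTP %d)\n", body, resp.StatusCode)
    }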
Nov 13 12:05:54.501409 coreos-metadata[1649]: Nov 13 12:05:54.501 INFO Fetch successful Nov 13 12:05:54.503762 unknown[1649]: wrote ssh authorized keys file for user: core Nov 13 12:05:54.531499 update-ssh-keys[1809]: Updated "/home/core/.ssh/authorized_keys" Nov 13 12:05:54.534357 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 13 12:05:54.539334 systemd[1]: Finished sshkeys.service. Nov 13 12:05:54.546133 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 13 12:05:54.546582 systemd[1]: Startup finished in 15.841s (kernel) + 12.661s (userspace) = 28.503s. Nov 13 12:05:59.408633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 13 12:05:59.424326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:05:59.678217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 12:05:59.689692 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 12:05:59.777457 kubelet[1827]: E1113 12:05:59.777346 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 12:05:59.782221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 12:05:59.782534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 12:06:04.630353 systemd[1]: Started sshd@3-10.230.32.222:22-147.75.109.163:47748.service - OpenSSH per-connection server daemon (147.75.109.163:47748). Nov 13 12:06:05.571807 sshd[1837]: Accepted publickey for core from 147.75.109.163 port 47748 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:06:05.573994 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:06:05.581368 systemd-logind[1597]: New session 6 of user core. Nov 13 12:06:05.590531 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 13 12:06:06.192910 sshd[1837]: pam_unix(sshd:session): session closed for user core Nov 13 12:06:06.197743 systemd[1]: sshd@3-10.230.32.222:22-147.75.109.163:47748.service: Deactivated successfully. Nov 13 12:06:06.201985 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit. Nov 13 12:06:06.202331 systemd[1]: session-6.scope: Deactivated successfully. Nov 13 12:06:06.204402 systemd-logind[1597]: Removed session 6. Nov 13 12:06:06.343326 systemd[1]: Started sshd@4-10.230.32.222:22-147.75.109.163:47752.service - OpenSSH per-connection server daemon (147.75.109.163:47752). Nov 13 12:06:07.227812 sshd[1845]: Accepted publickey for core from 147.75.109.163 port 47752 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:06:07.229803 sshd[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:06:07.237842 systemd-logind[1597]: New session 7 of user core. Nov 13 12:06:07.243452 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 13 12:06:07.839425 sshd[1845]: pam_unix(sshd:session): session closed for user core Nov 13 12:06:07.844889 systemd[1]: sshd@4-10.230.32.222:22-147.75.109.163:47752.service: Deactivated successfully. Nov 13 12:06:07.848725 systemd-logind[1597]: Session 7 logged out. 
Waiting for processes to exit. Nov 13 12:06:07.849324 systemd[1]: session-7.scope: Deactivated successfully. Nov 13 12:06:07.851706 systemd-logind[1597]: Removed session 7. Nov 13 12:06:07.998447 systemd[1]: Started sshd@5-10.230.32.222:22-147.75.109.163:47760.service - OpenSSH per-connection server daemon (147.75.109.163:47760). Nov 13 12:06:08.884518 sshd[1853]: Accepted publickey for core from 147.75.109.163 port 47760 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:06:08.886803 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:06:08.893723 systemd-logind[1597]: New session 8 of user core. Nov 13 12:06:08.905628 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 13 12:06:09.510297 sshd[1853]: pam_unix(sshd:session): session closed for user core Nov 13 12:06:09.514849 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit. Nov 13 12:06:09.516551 systemd[1]: sshd@5-10.230.32.222:22-147.75.109.163:47760.service: Deactivated successfully. Nov 13 12:06:09.521619 systemd[1]: session-8.scope: Deactivated successfully. Nov 13 12:06:09.522990 systemd-logind[1597]: Removed session 8. Nov 13 12:06:09.661404 systemd[1]: Started sshd@6-10.230.32.222:22-147.75.109.163:37082.service - OpenSSH per-connection server daemon (147.75.109.163:37082). Nov 13 12:06:09.856825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 13 12:06:09.863240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:06:10.013214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 12:06:10.018065 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 12:06:10.126431 kubelet[1875]: E1113 12:06:10.126205 1875 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 12:06:10.128726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 12:06:10.129325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 12:06:10.558475 sshd[1861]: Accepted publickey for core from 147.75.109.163 port 37082 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:06:10.560536 sshd[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:06:10.569325 systemd-logind[1597]: New session 9 of user core. Nov 13 12:06:10.576459 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 13 12:06:11.049153 sudo[1885]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 13 12:06:11.049671 sudo[1885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 12:06:11.063334 sudo[1885]: pam_unix(sudo:session): session closed for user root Nov 13 12:06:11.207673 sshd[1861]: pam_unix(sshd:session): session closed for user core Nov 13 12:06:11.212375 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. Nov 13 12:06:11.214463 systemd[1]: sshd@6-10.230.32.222:22-147.75.109.163:37082.service: Deactivated successfully. Nov 13 12:06:11.217336 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 13 12:06:11.218955 systemd-logind[1597]: Removed session 9. Nov 13 12:06:11.355711 systemd[1]: Started sshd@7-10.230.32.222:22-147.75.109.163:37094.service - OpenSSH per-connection server daemon (147.75.109.163:37094). Nov 13 12:06:12.250989 sshd[1890]: Accepted publickey for core from 147.75.109.163 port 37094 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:06:12.253126 sshd[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:06:12.261095 systemd-logind[1597]: New session 10 of user core. Nov 13 12:06:12.267450 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 13 12:06:12.728104 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 13 12:06:12.728599 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 12:06:12.733851 sudo[1895]: pam_unix(sudo:session): session closed for user root Nov 13 12:06:12.741619 sudo[1894]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 13 12:06:12.742080 sudo[1894]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 12:06:12.766396 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 13 12:06:12.768567 auditctl[1898]: No rules Nov 13 12:06:12.769359 systemd[1]: audit-rules.service: Deactivated successfully. Nov 13 12:06:12.769728 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 13 12:06:12.787548 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 13 12:06:12.822175 augenrules[1917]: No rules Nov 13 12:06:12.823772 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 13 12:06:12.826052 sudo[1894]: pam_unix(sudo:session): session closed for user root Nov 13 12:06:12.970341 sshd[1890]: pam_unix(sshd:session): session closed for user core Nov 13 12:06:12.976797 systemd[1]: sshd@7-10.230.32.222:22-147.75.109.163:37094.service: Deactivated successfully. Nov 13 12:06:12.978296 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit. Nov 13 12:06:12.981329 systemd[1]: session-10.scope: Deactivated successfully. Nov 13 12:06:12.983532 systemd-logind[1597]: Removed session 10. Nov 13 12:06:13.121333 systemd[1]: Started sshd@8-10.230.32.222:22-147.75.109.163:37100.service - OpenSSH per-connection server daemon (147.75.109.163:37100). Nov 13 12:06:14.016910 sshd[1926]: Accepted publickey for core from 147.75.109.163 port 37100 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:06:14.019364 sshd[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:06:14.026123 systemd-logind[1597]: New session 11 of user core. Nov 13 12:06:14.034496 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 13 12:06:14.495416 sudo[1930]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 13 12:06:14.495890 sudo[1930]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 12:06:14.958679 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 13 12:06:14.970706 (dockerd)[1945]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 13 12:06:15.447128 dockerd[1945]: time="2024-11-13T12:06:15.446885833Z" level=info msg="Starting up" Nov 13 12:06:15.767698 dockerd[1945]: time="2024-11-13T12:06:15.766660572Z" level=info msg="Loading containers: start." Nov 13 12:06:15.913322 kernel: Initializing XFRM netlink socket Nov 13 12:06:16.034204 systemd-networkd[1261]: docker0: Link UP Nov 13 12:06:16.052764 dockerd[1945]: time="2024-11-13T12:06:16.052700347Z" level=info msg="Loading containers: done." Nov 13 12:06:16.077337 dockerd[1945]: time="2024-11-13T12:06:16.074978878Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 13 12:06:16.077337 dockerd[1945]: time="2024-11-13T12:06:16.075153808Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 13 12:06:16.077337 dockerd[1945]: time="2024-11-13T12:06:16.075360846Z" level=info msg="Daemon has completed initialization" Nov 13 12:06:16.077754 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck434360969-merged.mount: Deactivated successfully. Nov 13 12:06:16.118950 dockerd[1945]: time="2024-11-13T12:06:16.118839665Z" level=info msg="API listen on /run/docker.sock" Nov 13 12:06:16.119211 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 13 12:06:17.573086 containerd[1621]: time="2024-11-13T12:06:17.572962556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 13 12:06:17.668418 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 13 12:06:18.636378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167853119.mount: Deactivated successfully. Nov 13 12:06:20.356186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 13 12:06:20.368368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:06:20.659407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 12:06:20.671754 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 12:06:20.766323 kubelet[2166]: E1113 12:06:20.765986 2166 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 12:06:20.769446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 12:06:20.769863 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
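dockerd above completes initialization and reports "API listen on /run/docker.sock". A minimal Go sketch that talks to that socket with the Docker SDK (github.com/docker/docker/client) and prints the server version; the socket path is from the log, the rest is illustrative.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	// Point the SDK at the socket the daemon log above says it listens on.
    	cli, err := client.NewClientWithOpts(
    		client.WithHost("unix:///run/docker.sock"),
    		client.WithAPIVersionNegotiation(),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	ver, err := cli.ServerVersion(context.Background())
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("docker %s (API %s)\n", ver.Version, ver.APIVersion)
    }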
Nov 13 12:06:21.949900 containerd[1621]: time="2024-11-13T12:06:21.949817504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:21.951822 containerd[1621]: time="2024-11-13T12:06:21.951750414Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140807" Nov 13 12:06:21.952786 containerd[1621]: time="2024-11-13T12:06:21.952317089Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:21.956751 containerd[1621]: time="2024-11-13T12:06:21.956681992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:21.958676 containerd[1621]: time="2024-11-13T12:06:21.958372521Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 4.385273446s" Nov 13 12:06:21.958676 containerd[1621]: time="2024-11-13T12:06:21.958455708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 13 12:06:21.990068 containerd[1621]: time="2024-11-13T12:06:21.989720394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 13 12:06:24.976707 containerd[1621]: time="2024-11-13T12:06:24.976571867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:24.978357 containerd[1621]: time="2024-11-13T12:06:24.978289970Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218307" Nov 13 12:06:24.980044 containerd[1621]: time="2024-11-13T12:06:24.979949983Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:24.985132 containerd[1621]: time="2024-11-13T12:06:24.984258694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:24.986536 containerd[1621]: time="2024-11-13T12:06:24.986221505Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.99642043s" Nov 13 12:06:24.986536 containerd[1621]: time="2024-11-13T12:06:24.986282869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 13 
12:06:25.021750 containerd[1621]: time="2024-11-13T12:06:25.021704148Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 13 12:06:26.801860 containerd[1621]: time="2024-11-13T12:06:26.801763027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:26.803434 containerd[1621]: time="2024-11-13T12:06:26.803371027Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332668" Nov 13 12:06:26.804452 containerd[1621]: time="2024-11-13T12:06:26.804395042Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:26.808442 containerd[1621]: time="2024-11-13T12:06:26.808367273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:26.810235 containerd[1621]: time="2024-11-13T12:06:26.810025431Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.788014968s" Nov 13 12:06:26.810235 containerd[1621]: time="2024-11-13T12:06:26.810071841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 13 12:06:26.841116 containerd[1621]: time="2024-11-13T12:06:26.841030119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 13 12:06:28.371571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664286595.mount: Deactivated successfully. 
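The paired "stop pulling image ... bytes read=N" and "Pulled image ... in <duration>" messages above give enough to estimate transfer rates for each pull. A back-of-the-envelope Go sketch using two pulls logged above (kube-apiserver and kube-scheduler); the byte counts and durations are copied from the journal, and the result should be read as a rough figure, since the byte counter reflects data fetched rather than unpacked image size.

// pull_throughput.go: convert the logged pull byte counts and durations into an
// approximate transfer rate.
package main

import (
	"fmt"
	"time"
)

func rate(bytes int64, d time.Duration) float64 {
	return float64(bytes) / d.Seconds() / (1 << 20) // MiB per second
}

func main() {
	// durations and byte counts copied from the containerd lines above
	apiserver, _ := time.ParseDuration("4.385273446s")
	scheduler, _ := time.ParseDuration("1.788014968s")

	fmt.Printf("kube-apiserver:v1.29.10  %d bytes in %v ≈ %.1f MiB/s\n", int64(35140807), apiserver, rate(35140807, apiserver))
	fmt.Printf("kube-scheduler:v1.29.10  %d bytes in %v ≈ %.1f MiB/s\n", int64(17332668), scheduler, rate(17332668, scheduler))
}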
Nov 13 12:06:28.968254 containerd[1621]: time="2024-11-13T12:06:28.968181186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:28.969623 containerd[1621]: time="2024-11-13T12:06:28.969310083Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616824" Nov 13 12:06:28.970788 containerd[1621]: time="2024-11-13T12:06:28.970700248Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:28.973613 containerd[1621]: time="2024-11-13T12:06:28.973542192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:28.975228 containerd[1621]: time="2024-11-13T12:06:28.974758000Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.133671346s" Nov 13 12:06:28.975228 containerd[1621]: time="2024-11-13T12:06:28.974803992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 13 12:06:29.006132 containerd[1621]: time="2024-11-13T12:06:29.006065991Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 13 12:06:29.644908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628894492.mount: Deactivated successfully. Nov 13 12:06:30.856666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 13 12:06:30.866237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:06:31.129247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
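If these containerd "Pulled image" messages need to be post-processed, for example to build a table of image sizes and pull times, the fields can be lifted out with a regular expression. A hedged Go sketch follows; the sample input is one line from the journal above with its middle elided, and the pattern assumes containerd keeps this exact message wording and quoting.

// parse_pull_line.go: extract the image tag, reported size, and pull duration from a
// containerd "Pulled image" message.
package main

import (
	"fmt"
	"regexp"
)

// one message from the journal above, middle elided for brevity
const sample = `msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" ... size \"28615835\" in 2.133671346s"`

// \\" matches the escaped quotes the journal shows inside msg="..."
var pulledRe = regexp.MustCompile(`Pulled image \\"([^\\]+)\\".*size \\"(\d+)\\" in ([0-9.]+s)`)

func main() {
	m := pulledRe.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("line did not match")
		return
	}
	fmt.Printf("image=%s size=%s bytes duration=%s\n", m[1], m[2], m[3])
}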
Nov 13 12:06:31.136389 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 12:06:31.179449 containerd[1621]: time="2024-11-13T12:06:31.178084825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:31.181129 containerd[1621]: time="2024-11-13T12:06:31.180996954Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Nov 13 12:06:31.182328 containerd[1621]: time="2024-11-13T12:06:31.182257610Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:31.188940 containerd[1621]: time="2024-11-13T12:06:31.188886161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:31.191332 containerd[1621]: time="2024-11-13T12:06:31.190662149Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.184522856s" Nov 13 12:06:31.191332 containerd[1621]: time="2024-11-13T12:06:31.190753095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 13 12:06:31.231021 containerd[1621]: time="2024-11-13T12:06:31.230946149Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 13 12:06:31.235509 kubelet[2264]: E1113 12:06:31.235374 2264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 12:06:31.238220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 12:06:31.238582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 12:06:32.046125 update_engine[1601]: I20241113 12:06:32.045328 1601 update_attempter.cc:509] Updating boot flags... Nov 13 12:06:32.097396 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2285) Nov 13 12:06:32.189097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2283) Nov 13 12:06:32.279963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110320624.mount: Deactivated successfully. 
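This is the same missing-config failure as before; systemd keeps rescheduling the unit (restart counter 3 at 12:06:20.356186, counter 4 at 12:06:30.856666 above), so the effective retry cadence can be read straight from the timestamps. A tiny Go sketch of that arithmetic, with the two times copied from the journal:

// restart_interval.go: time between kubelet restart attempts 3 and 4, taken from the
// "Scheduled restart job" lines above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "15:04:05.000000"
	t3, _ := time.Parse(layout, "12:06:20.356186") // restart counter is at 3
	t4, _ := time.Parse(layout, "12:06:30.856666") // restart counter is at 4
	fmt.Printf("interval between restart attempts: %v\n", t4.Sub(t3).Round(time.Millisecond))
}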
Nov 13 12:06:32.287064 containerd[1621]: time="2024-11-13T12:06:32.286869716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:32.288497 containerd[1621]: time="2024-11-13T12:06:32.288417914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Nov 13 12:06:32.289811 containerd[1621]: time="2024-11-13T12:06:32.289740419Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:32.293216 containerd[1621]: time="2024-11-13T12:06:32.293118182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:32.294690 containerd[1621]: time="2024-11-13T12:06:32.294443507Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.063406189s" Nov 13 12:06:32.294690 containerd[1621]: time="2024-11-13T12:06:32.294495025Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 13 12:06:32.326806 containerd[1621]: time="2024-11-13T12:06:32.326640048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 13 12:06:33.004957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479846320.mount: Deactivated successfully. Nov 13 12:06:36.411302 containerd[1621]: time="2024-11-13T12:06:36.411235090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:36.412919 containerd[1621]: time="2024-11-13T12:06:36.412863641Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Nov 13 12:06:36.413694 containerd[1621]: time="2024-11-13T12:06:36.413633550Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:36.418115 containerd[1621]: time="2024-11-13T12:06:36.417993818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:06:36.421649 containerd[1621]: time="2024-11-13T12:06:36.421462095Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.09439319s" Nov 13 12:06:36.421649 containerd[1621]: time="2024-11-13T12:06:36.421506923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 13 12:06:40.985850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
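With the etcd pull above, all of the control-plane images referenced so far have been fetched. Summing the per-image sizes containerd reported in its "Pulled image" lines gives the total volume downloaded during this phase of boot; a short Go sketch with the byte counts copied from the journal:

// pulled_total.go: total size of the images pulled so far, using the per-image
// sizes reported by containerd above.
package main

import "fmt"

func main() {
	sizes := map[string]int64{
		"kube-apiserver:v1.29.10":          35137599,
		"kube-controller-manager:v1.29.10": 33663665,
		"kube-scheduler:v1.29.10":          18778044,
		"kube-proxy:v1.29.10":              28615835,
		"coredns:v1.11.1":                  18182961,
		"pause:3.9":                        321520,
		"etcd:3.5.10-0":                    56649232,
	}
	var total int64
	for _, size := range sizes {
		total += size
	}
	fmt.Printf("images pulled: %d, total %d bytes (%.1f MiB)\n", len(sizes), total, float64(total)/(1<<20))
}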
Nov 13 12:06:41.001603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:06:41.029491 systemd[1]: Reloading requested from client PID 2411 ('systemctl') (unit session-11.scope)... Nov 13 12:06:41.029691 systemd[1]: Reloading... Nov 13 12:06:41.222037 zram_generator::config[2450]: No configuration found. Nov 13 12:06:41.391278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 12:06:41.494346 systemd[1]: Reloading finished in 463 ms. Nov 13 12:06:41.549221 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 13 12:06:41.549422 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 13 12:06:41.549928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 12:06:41.557815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:06:41.714289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 12:06:41.714686 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 12:06:41.804068 kubelet[2526]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 12:06:41.804068 kubelet[2526]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 13 12:06:41.804068 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 12:06:41.806661 kubelet[2526]: I1113 12:06:41.806565 2526 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 13 12:06:42.095673 kubelet[2526]: I1113 12:06:42.095521 2526 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 13 12:06:42.095673 kubelet[2526]: I1113 12:06:42.095588 2526 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 13 12:06:42.096475 kubelet[2526]: I1113 12:06:42.096425 2526 server.go:919] "Client rotation is on, will bootstrap in background" Nov 13 12:06:42.131243 kubelet[2526]: I1113 12:06:42.130608 2526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 12:06:42.133891 kubelet[2526]: E1113 12:06:42.133754 2526 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.32.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.148520 kubelet[2526]: I1113 12:06:42.148357 2526 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 13 12:06:42.149115 kubelet[2526]: I1113 12:06:42.149091 2526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 13 12:06:42.150383 kubelet[2526]: I1113 12:06:42.150313 2526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 13 12:06:42.151029 kubelet[2526]: I1113 12:06:42.150926 2526 topology_manager.go:138] "Creating topology manager with none policy" Nov 13 12:06:42.151029 kubelet[2526]: I1113 12:06:42.150984 2526 container_manager_linux.go:301] "Creating device plugin manager" Nov 13 12:06:42.151215 kubelet[2526]: I1113 12:06:42.151183 2526 state_mem.go:36] "Initialized new in-memory state store" Nov 13 12:06:42.153412 kubelet[2526]: I1113 12:06:42.153273 2526 kubelet.go:396] "Attempting to sync node with API server" Nov 13 12:06:42.153412 kubelet[2526]: I1113 12:06:42.153316 2526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 13 12:06:42.154301 kubelet[2526]: W1113 12:06:42.154182 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.32.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-sx7g0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.154301 kubelet[2526]: E1113 12:06:42.154262 2526 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.32.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-sx7g0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.155553 kubelet[2526]: I1113 12:06:42.155520 2526 kubelet.go:312] "Adding apiserver pod source" Nov 13 12:06:42.155655 kubelet[2526]: I1113 12:06:42.155604 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 13 12:06:42.157731 kubelet[2526]: I1113 12:06:42.157707 2526 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 13 12:06:42.163143 kubelet[2526]: I1113 
12:06:42.163109 2526 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 13 12:06:42.165290 kubelet[2526]: W1113 12:06:42.165187 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.32.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.165421 kubelet[2526]: E1113 12:06:42.165297 2526 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.32.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.165573 kubelet[2526]: W1113 12:06:42.165546 2526 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 13 12:06:42.166962 kubelet[2526]: I1113 12:06:42.166924 2526 server.go:1256] "Started kubelet" Nov 13 12:06:42.167879 kubelet[2526]: I1113 12:06:42.167847 2526 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 13 12:06:42.170541 kubelet[2526]: I1113 12:06:42.170517 2526 server.go:461] "Adding debug handlers to kubelet server" Nov 13 12:06:42.175877 kubelet[2526]: I1113 12:06:42.175843 2526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 13 12:06:42.176306 kubelet[2526]: I1113 12:06:42.176279 2526 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 13 12:06:42.177849 kubelet[2526]: I1113 12:06:42.177797 2526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 13 12:06:42.182139 kubelet[2526]: E1113 12:06:42.180128 2526 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.32.222:6443/api/v1/namespaces/default/events\": dial tcp 10.230.32.222:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-sx7g0.gb1.brightbox.com.180785b6dbe327bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-sx7g0.gb1.brightbox.com,UID:srv-sx7g0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-sx7g0.gb1.brightbox.com,},FirstTimestamp:2024-11-13 12:06:42.166884284 +0000 UTC m=+0.443826363,LastTimestamp:2024-11-13 12:06:42.166884284 +0000 UTC m=+0.443826363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-sx7g0.gb1.brightbox.com,}" Nov 13 12:06:42.184709 kubelet[2526]: I1113 12:06:42.184614 2526 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 13 12:06:42.188551 kubelet[2526]: I1113 12:06:42.187993 2526 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 13 12:06:42.188551 kubelet[2526]: I1113 12:06:42.188136 2526 reconciler_new.go:29] "Reconciler: start to sync state" Nov 13 12:06:42.189377 kubelet[2526]: W1113 12:06:42.189303 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.32.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.189492 kubelet[2526]: E1113 12:06:42.189471 2526 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.32.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.189725 kubelet[2526]: E1113 12:06:42.189703 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-sx7g0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.32.222:6443: connect: connection refused" interval="200ms" Nov 13 12:06:42.191345 kubelet[2526]: I1113 12:06:42.191288 2526 factory.go:221] Registration of the systemd container factory successfully Nov 13 12:06:42.191443 kubelet[2526]: I1113 12:06:42.191424 2526 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 12:06:42.195482 kubelet[2526]: E1113 12:06:42.195451 2526 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 13 12:06:42.195821 kubelet[2526]: I1113 12:06:42.195796 2526 factory.go:221] Registration of the containerd container factory successfully Nov 13 12:06:42.228358 kubelet[2526]: I1113 12:06:42.228304 2526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 12:06:42.234186 kubelet[2526]: I1113 12:06:42.234164 2526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 13 12:06:42.236950 kubelet[2526]: I1113 12:06:42.234487 2526 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 12:06:42.239284 kubelet[2526]: I1113 12:06:42.238152 2526 kubelet.go:2329] "Starting kubelet main sync loop" Nov 13 12:06:42.239284 kubelet[2526]: E1113 12:06:42.238276 2526 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 12:06:42.239748 kubelet[2526]: W1113 12:06:42.239698 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.32.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.241868 kubelet[2526]: E1113 12:06:42.241840 2526 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.32.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:42.245295 kubelet[2526]: I1113 12:06:42.245267 2526 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 12:06:42.245406 kubelet[2526]: I1113 12:06:42.245385 2526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 12:06:42.245479 kubelet[2526]: I1113 12:06:42.245432 2526 state_mem.go:36] "Initialized new in-memory state store" Nov 13 12:06:42.247709 kubelet[2526]: I1113 12:06:42.247649 2526 policy_none.go:49] "None policy: Start" Nov 13 12:06:42.248615 kubelet[2526]: I1113 12:06:42.248577 2526 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 12:06:42.248698 kubelet[2526]: I1113 12:06:42.248622 2526 state_mem.go:35] "Initializing new in-memory state store" Nov 13 12:06:42.258086 kubelet[2526]: 
I1113 12:06:42.257870 2526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 12:06:42.260019 kubelet[2526]: I1113 12:06:42.259939 2526 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 12:06:42.261646 kubelet[2526]: E1113 12:06:42.261605 2526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-sx7g0.gb1.brightbox.com\" not found" Nov 13 12:06:42.288727 kubelet[2526]: I1113 12:06:42.288632 2526 kubelet_node_status.go:73] "Attempting to register node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.289282 kubelet[2526]: E1113 12:06:42.289253 2526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.32.222:6443/api/v1/nodes\": dial tcp 10.230.32.222:6443: connect: connection refused" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.338678 kubelet[2526]: I1113 12:06:42.338615 2526 topology_manager.go:215] "Topology Admit Handler" podUID="ea61fb35b5e87cc9813253fc986f81df" podNamespace="kube-system" podName="kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.345868 kubelet[2526]: I1113 12:06:42.345049 2526 topology_manager.go:215] "Topology Admit Handler" podUID="92b63193d3b494a544bdf6cee873d9f3" podNamespace="kube-system" podName="kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.349122 kubelet[2526]: I1113 12:06:42.349094 2526 topology_manager.go:215] "Topology Admit Handler" podUID="8cacce1df9ae426b7529c517ea5b9868" podNamespace="kube-system" podName="kube-scheduler-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.389574 kubelet[2526]: I1113 12:06:42.389538 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea61fb35b5e87cc9813253fc986f81df-ca-certs\") pod \"kube-apiserver-srv-sx7g0.gb1.brightbox.com\" (UID: \"ea61fb35b5e87cc9813253fc986f81df\") " pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.391269 kubelet[2526]: E1113 12:06:42.391214 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-sx7g0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.32.222:6443: connect: connection refused" interval="400ms" Nov 13 12:06:42.393695 kubelet[2526]: I1113 12:06:42.393655 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea61fb35b5e87cc9813253fc986f81df-usr-share-ca-certificates\") pod \"kube-apiserver-srv-sx7g0.gb1.brightbox.com\" (UID: \"ea61fb35b5e87cc9813253fc986f81df\") " pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.393827 kubelet[2526]: I1113 12:06:42.393764 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cacce1df9ae426b7529c517ea5b9868-kubeconfig\") pod \"kube-scheduler-srv-sx7g0.gb1.brightbox.com\" (UID: \"8cacce1df9ae426b7529c517ea5b9868\") " pod="kube-system/kube-scheduler-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.393882 kubelet[2526]: I1113 12:06:42.393830 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-kubeconfig\") pod 
\"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.393945 kubelet[2526]: I1113 12:06:42.393887 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.393945 kubelet[2526]: I1113 12:06:42.393925 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea61fb35b5e87cc9813253fc986f81df-k8s-certs\") pod \"kube-apiserver-srv-sx7g0.gb1.brightbox.com\" (UID: \"ea61fb35b5e87cc9813253fc986f81df\") " pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.394072 kubelet[2526]: I1113 12:06:42.393996 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-ca-certs\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.394072 kubelet[2526]: I1113 12:06:42.394062 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-flexvolume-dir\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.394195 kubelet[2526]: I1113 12:06:42.394097 2526 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-k8s-certs\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.496042 kubelet[2526]: I1113 12:06:42.495525 2526 kubelet_node_status.go:73] "Attempting to register node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.496876 kubelet[2526]: E1113 12:06:42.496633 2526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.32.222:6443/api/v1/nodes\": dial tcp 10.230.32.222:6443: connect: connection refused" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.658168 containerd[1621]: time="2024-11-13T12:06:42.658111789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-sx7g0.gb1.brightbox.com,Uid:ea61fb35b5e87cc9813253fc986f81df,Namespace:kube-system,Attempt:0,}" Nov 13 12:06:42.664437 containerd[1621]: time="2024-11-13T12:06:42.664168090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-sx7g0.gb1.brightbox.com,Uid:92b63193d3b494a544bdf6cee873d9f3,Namespace:kube-system,Attempt:0,}" Nov 13 12:06:42.664437 containerd[1621]: time="2024-11-13T12:06:42.664180391Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-srv-sx7g0.gb1.brightbox.com,Uid:8cacce1df9ae426b7529c517ea5b9868,Namespace:kube-system,Attempt:0,}" Nov 13 12:06:42.792771 kubelet[2526]: E1113 12:06:42.792631 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-sx7g0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.32.222:6443: connect: connection refused" interval="800ms" Nov 13 12:06:42.900799 kubelet[2526]: I1113 12:06:42.900429 2526 kubelet_node_status.go:73] "Attempting to register node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:42.901444 kubelet[2526]: E1113 12:06:42.900938 2526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.32.222:6443/api/v1/nodes\": dial tcp 10.230.32.222:6443: connect: connection refused" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:43.284925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816188822.mount: Deactivated successfully. Nov 13 12:06:43.286376 kubelet[2526]: W1113 12:06:43.286220 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.32.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.286376 kubelet[2526]: E1113 12:06:43.286326 2526 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.32.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.299924 containerd[1621]: time="2024-11-13T12:06:43.298666368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 12:06:43.300695 containerd[1621]: time="2024-11-13T12:06:43.300636116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 13 12:06:43.304419 containerd[1621]: time="2024-11-13T12:06:43.304383126Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 12:06:43.306808 containerd[1621]: time="2024-11-13T12:06:43.306757206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 13 12:06:43.308365 containerd[1621]: time="2024-11-13T12:06:43.308329432Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 12:06:43.310077 containerd[1621]: time="2024-11-13T12:06:43.310039112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 13 12:06:43.312082 containerd[1621]: time="2024-11-13T12:06:43.310505617Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 12:06:43.315167 containerd[1621]: time="2024-11-13T12:06:43.315120570Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 12:06:43.317445 containerd[1621]: time="2024-11-13T12:06:43.317407928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.039725ms" Nov 13 12:06:43.319148 containerd[1621]: time="2024-11-13T12:06:43.319067631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.827031ms" Nov 13 12:06:43.322717 containerd[1621]: time="2024-11-13T12:06:43.322599656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.350596ms" Nov 13 12:06:43.370946 kubelet[2526]: W1113 12:06:43.369483 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.32.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.370946 kubelet[2526]: E1113 12:06:43.369578 2526 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.32.222:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.519511 containerd[1621]: time="2024-11-13T12:06:43.518895659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:06:43.519511 containerd[1621]: time="2024-11-13T12:06:43.518979624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:06:43.519511 containerd[1621]: time="2024-11-13T12:06:43.519025033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:06:43.519511 containerd[1621]: time="2024-11-13T12:06:43.519241607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:06:43.523676 containerd[1621]: time="2024-11-13T12:06:43.523572743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:06:43.523865 containerd[1621]: time="2024-11-13T12:06:43.523673392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:06:43.523865 containerd[1621]: time="2024-11-13T12:06:43.523698069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:06:43.525227 containerd[1621]: time="2024-11-13T12:06:43.525070610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:06:43.527935 containerd[1621]: time="2024-11-13T12:06:43.527819751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:06:43.528071 containerd[1621]: time="2024-11-13T12:06:43.527921332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:06:43.528071 containerd[1621]: time="2024-11-13T12:06:43.527946390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:06:43.528229 containerd[1621]: time="2024-11-13T12:06:43.528108592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:06:43.548641 kubelet[2526]: W1113 12:06:43.548396 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.32.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.548641 kubelet[2526]: E1113 12:06:43.548491 2526 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.32.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.585479 kubelet[2526]: W1113 12:06:43.585396 2526 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.32.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-sx7g0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.585712 kubelet[2526]: E1113 12:06:43.585677 2526 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.32.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-sx7g0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:43.594744 kubelet[2526]: E1113 12:06:43.594557 2526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-sx7g0.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.32.222:6443: connect: connection refused" interval="1.6s" Nov 13 12:06:43.708352 kubelet[2526]: I1113 12:06:43.708288 2526 kubelet_node_status.go:73] "Attempting to register node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:43.708775 kubelet[2526]: E1113 12:06:43.708733 2526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.32.222:6443/api/v1/nodes\": dial tcp 10.230.32.222:6443: connect: connection refused" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:43.721347 containerd[1621]: time="2024-11-13T12:06:43.721265196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-sx7g0.gb1.brightbox.com,Uid:8cacce1df9ae426b7529c517ea5b9868,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"8fd841f8b348d5abd00f72c1d00503f7d23613f4742df404ee7f081a6488b046\"" Nov 13 12:06:43.728156 containerd[1621]: time="2024-11-13T12:06:43.727391854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-sx7g0.gb1.brightbox.com,Uid:ea61fb35b5e87cc9813253fc986f81df,Namespace:kube-system,Attempt:0,} returns sandbox id \"a77221b0bec9a48c13398fe9b37d277765048476ec8fa66a65db56061285596d\"" Nov 13 12:06:43.733444 containerd[1621]: time="2024-11-13T12:06:43.733396461Z" level=info msg="CreateContainer within sandbox \"a77221b0bec9a48c13398fe9b37d277765048476ec8fa66a65db56061285596d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 13 12:06:43.733564 containerd[1621]: time="2024-11-13T12:06:43.733405491Z" level=info msg="CreateContainer within sandbox \"8fd841f8b348d5abd00f72c1d00503f7d23613f4742df404ee7f081a6488b046\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 13 12:06:43.744407 containerd[1621]: time="2024-11-13T12:06:43.744330558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-sx7g0.gb1.brightbox.com,Uid:92b63193d3b494a544bdf6cee873d9f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"64cbfaa879453b190831d426a5b3b82c8d9d0bbb9fee7f8b5ddfd0ebb7e5467a\"" Nov 13 12:06:43.750928 containerd[1621]: time="2024-11-13T12:06:43.750868993Z" level=info msg="CreateContainer within sandbox \"64cbfaa879453b190831d426a5b3b82c8d9d0bbb9fee7f8b5ddfd0ebb7e5467a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 13 12:06:43.753378 containerd[1621]: time="2024-11-13T12:06:43.753336680Z" level=info msg="CreateContainer within sandbox \"a77221b0bec9a48c13398fe9b37d277765048476ec8fa66a65db56061285596d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a7ee6225f0640bee4b8ae073e7d00592d7183873d28376e9fa46677b3382f64\"" Nov 13 12:06:43.754058 containerd[1621]: time="2024-11-13T12:06:43.754028074Z" level=info msg="StartContainer for \"9a7ee6225f0640bee4b8ae073e7d00592d7183873d28376e9fa46677b3382f64\"" Nov 13 12:06:43.762342 containerd[1621]: time="2024-11-13T12:06:43.762256729Z" level=info msg="CreateContainer within sandbox \"8fd841f8b348d5abd00f72c1d00503f7d23613f4742df404ee7f081a6488b046\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57c16df02c5d23e878dee78adf36330d459cafa9bb7776c0f56f8f75e59aebd6\"" Nov 13 12:06:43.764315 containerd[1621]: time="2024-11-13T12:06:43.764272929Z" level=info msg="StartContainer for \"57c16df02c5d23e878dee78adf36330d459cafa9bb7776c0f56f8f75e59aebd6\"" Nov 13 12:06:43.772482 containerd[1621]: time="2024-11-13T12:06:43.772405479Z" level=info msg="CreateContainer within sandbox \"64cbfaa879453b190831d426a5b3b82c8d9d0bbb9fee7f8b5ddfd0ebb7e5467a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b7a0ef132f96085d5f2024d1d625f3ddb60795455ef60cf5289ad8a649b8c2c\"" Nov 13 12:06:43.773160 containerd[1621]: time="2024-11-13T12:06:43.772967509Z" level=info msg="StartContainer for \"1b7a0ef132f96085d5f2024d1d625f3ddb60795455ef60cf5289ad8a649b8c2c\"" Nov 13 12:06:43.918051 containerd[1621]: time="2024-11-13T12:06:43.917977760Z" level=info msg="StartContainer for \"9a7ee6225f0640bee4b8ae073e7d00592d7183873d28376e9fa46677b3382f64\" returns successfully" Nov 13 12:06:43.946463 containerd[1621]: time="2024-11-13T12:06:43.945343607Z" level=info msg="StartContainer for \"57c16df02c5d23e878dee78adf36330d459cafa9bb7776c0f56f8f75e59aebd6\" returns successfully" Nov 13 
12:06:43.946463 containerd[1621]: time="2024-11-13T12:06:43.945349141Z" level=info msg="StartContainer for \"1b7a0ef132f96085d5f2024d1d625f3ddb60795455ef60cf5289ad8a649b8c2c\" returns successfully" Nov 13 12:06:44.191437 kubelet[2526]: E1113 12:06:44.191235 2526 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.32.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.32.222:6443: connect: connection refused Nov 13 12:06:45.313040 kubelet[2526]: I1113 12:06:45.312272 2526 kubelet_node_status.go:73] "Attempting to register node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:46.983163 kubelet[2526]: E1113 12:06:46.983043 2526 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-sx7g0.gb1.brightbox.com\" not found" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:47.012907 kubelet[2526]: I1113 12:06:47.012857 2526 kubelet_node_status.go:76] "Successfully registered node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:47.050854 kubelet[2526]: E1113 12:06:47.050777 2526 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-sx7g0.gb1.brightbox.com.180785b6dbe327bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-sx7g0.gb1.brightbox.com,UID:srv-sx7g0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-sx7g0.gb1.brightbox.com,},FirstTimestamp:2024-11-13 12:06:42.166884284 +0000 UTC m=+0.443826363,LastTimestamp:2024-11-13 12:06:42.166884284 +0000 UTC m=+0.443826363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-sx7g0.gb1.brightbox.com,}" Nov 13 12:06:47.161714 kubelet[2526]: I1113 12:06:47.161666 2526 apiserver.go:52] "Watching apiserver" Nov 13 12:06:47.188492 kubelet[2526]: I1113 12:06:47.188450 2526 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 13 12:06:49.968508 systemd[1]: Reloading requested from client PID 2803 ('systemctl') (unit session-11.scope)... Nov 13 12:06:49.968534 systemd[1]: Reloading... Nov 13 12:06:50.079405 zram_generator::config[2841]: No configuration found. Nov 13 12:06:50.200736 kubelet[2526]: W1113 12:06:50.200694 2526 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 12:06:50.325968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 12:06:50.436962 systemd[1]: Reloading finished in 467 ms. Nov 13 12:06:50.486461 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:06:50.488231 kubelet[2526]: I1113 12:06:50.486558 2526 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 12:06:50.500928 systemd[1]: kubelet.service: Deactivated successfully. Nov 13 12:06:50.501824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
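Every "connection refused" error above comes from the kubelet trying to reach the API server at 10.230.32.222:6443 before the static kube-apiserver pod it just launched is serving; once the port opens, node registration succeeds ("Successfully registered node" above) and the unit is then restarted with the final configuration. A minimal Go sketch of the same reachability check, with the address taken from the error messages; TLS and authentication are deliberately out of scope here.

// apiserver_probe.go: check whether the advertised API server endpoint accepts TCP
// connections yet, mirroring the dial errors the kubelet logs above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.230.32.222:6443" // address copied from the kubelet errors in the journal

	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("API server not reachable yet:", err) // the "connection refused" phase
		return
	}
	conn.Close()
	fmt.Println("TCP connection to", addr, "succeeded; kubelet registration can proceed")
}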
Nov 13 12:06:50.507619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 12:06:50.726349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 12:06:50.743700 (kubelet)[2916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 12:06:50.873422 kubelet[2916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 12:06:50.873422 kubelet[2916]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 13 12:06:50.873422 kubelet[2916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 12:06:50.873422 kubelet[2916]: I1113 12:06:50.873209 2916 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 13 12:06:50.890401 kubelet[2916]: I1113 12:06:50.889574 2916 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 13 12:06:50.890401 kubelet[2916]: I1113 12:06:50.889616 2916 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 13 12:06:50.890401 kubelet[2916]: I1113 12:06:50.889886 2916 server.go:919] "Client rotation is on, will bootstrap in background" Nov 13 12:06:50.893899 kubelet[2916]: I1113 12:06:50.893868 2916 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 13 12:06:50.914734 kubelet[2916]: I1113 12:06:50.913609 2916 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 12:06:50.936480 kubelet[2916]: I1113 12:06:50.936125 2916 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 13 12:06:50.937951 kubelet[2916]: I1113 12:06:50.936907 2916 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 13 12:06:50.940046 kubelet[2916]: I1113 12:06:50.938980 2916 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 13 12:06:50.940046 kubelet[2916]: I1113 12:06:50.939647 2916 topology_manager.go:138] "Creating topology manager with none policy" Nov 13 12:06:50.940046 kubelet[2916]: I1113 12:06:50.939668 2916 container_manager_linux.go:301] "Creating device plugin manager" Nov 13 12:06:50.940046 kubelet[2916]: I1113 12:06:50.939908 2916 state_mem.go:36] "Initialized new in-memory state store" Nov 13 12:06:50.944074 kubelet[2916]: I1113 12:06:50.944034 2916 kubelet.go:396] "Attempting to sync node with API server" Nov 13 12:06:50.947856 kubelet[2916]: I1113 12:06:50.947179 2916 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 13 12:06:50.947856 kubelet[2916]: I1113 12:06:50.947288 2916 kubelet.go:312] "Adding apiserver pod source" Nov 13 12:06:50.947856 kubelet[2916]: I1113 12:06:50.947347 2916 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 13 12:06:50.956078 kubelet[2916]: I1113 12:06:50.956040 2916 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 13 12:06:50.956624 kubelet[2916]: I1113 12:06:50.956601 2916 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 13 12:06:50.967439 kubelet[2916]: I1113 12:06:50.965968 2916 server.go:1256] "Started kubelet" Nov 13 12:06:50.976349 kubelet[2916]: I1113 12:06:50.975542 2916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 13 12:06:50.987737 kubelet[2916]: I1113 12:06:50.986706 2916 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 13 12:06:50.989356 kubelet[2916]: I1113 12:06:50.989325 2916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 13 12:06:50.990766 kubelet[2916]: I1113 12:06:50.990041 2916 server.go:233] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 13 12:06:50.994430 kubelet[2916]: I1113 12:06:50.992240 2916 server.go:461] "Adding debug handlers to kubelet server" Nov 13 12:06:50.995837 kubelet[2916]: I1113 12:06:50.994803 2916 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 13 12:06:50.999145 kubelet[2916]: I1113 12:06:50.999106 2916 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 13 12:06:50.999688 kubelet[2916]: I1113 12:06:50.999358 2916 reconciler_new.go:29] "Reconciler: start to sync state" Nov 13 12:06:51.007781 kubelet[2916]: I1113 12:06:51.007711 2916 factory.go:221] Registration of the systemd container factory successfully Nov 13 12:06:51.008243 kubelet[2916]: I1113 12:06:51.007885 2916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 12:06:51.011663 kubelet[2916]: E1113 12:06:51.011522 2916 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 13 12:06:51.014957 kubelet[2916]: I1113 12:06:51.014746 2916 factory.go:221] Registration of the containerd container factory successfully Nov 13 12:06:51.032621 kubelet[2916]: I1113 12:06:51.032493 2916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 12:06:51.038054 kubelet[2916]: I1113 12:06:51.038013 2916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 13 12:06:51.038251 kubelet[2916]: I1113 12:06:51.038079 2916 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 12:06:51.038251 kubelet[2916]: I1113 12:06:51.038111 2916 kubelet.go:2329] "Starting kubelet main sync loop" Nov 13 12:06:51.038565 kubelet[2916]: E1113 12:06:51.038459 2916 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 12:06:51.123329 kubelet[2916]: I1113 12:06:51.123284 2916 kubelet_node_status.go:73] "Attempting to register node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.139178 kubelet[2916]: I1113 12:06:51.137435 2916 kubelet_node_status.go:112] "Node was previously registered" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.139178 kubelet[2916]: I1113 12:06:51.137562 2916 kubelet_node_status.go:76] "Successfully registered node" node="srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.143837 kubelet[2916]: E1113 12:06:51.138964 2916 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 13 12:06:51.178768 kubelet[2916]: I1113 12:06:51.178709 2916 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 12:06:51.178768 kubelet[2916]: I1113 12:06:51.178750 2916 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 12:06:51.178768 kubelet[2916]: I1113 12:06:51.178776 2916 state_mem.go:36] "Initialized new in-memory state store" Nov 13 12:06:51.180079 kubelet[2916]: I1113 12:06:51.179295 2916 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 13 12:06:51.180079 kubelet[2916]: I1113 12:06:51.179384 2916 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 13 12:06:51.180079 kubelet[2916]: I1113 12:06:51.179400 2916 policy_none.go:49] "None policy: Start" Nov 13 12:06:51.180370 kubelet[2916]: I1113 12:06:51.180351 2916 
memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 12:06:51.180445 kubelet[2916]: I1113 12:06:51.180383 2916 state_mem.go:35] "Initializing new in-memory state store" Nov 13 12:06:51.181600 kubelet[2916]: I1113 12:06:51.180590 2916 state_mem.go:75] "Updated machine memory state" Nov 13 12:06:51.184819 kubelet[2916]: I1113 12:06:51.184783 2916 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 12:06:51.185993 kubelet[2916]: I1113 12:06:51.185553 2916 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 12:06:51.340250 kubelet[2916]: I1113 12:06:51.339933 2916 topology_manager.go:215] "Topology Admit Handler" podUID="92b63193d3b494a544bdf6cee873d9f3" podNamespace="kube-system" podName="kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.340250 kubelet[2916]: I1113 12:06:51.340198 2916 topology_manager.go:215] "Topology Admit Handler" podUID="8cacce1df9ae426b7529c517ea5b9868" podNamespace="kube-system" podName="kube-scheduler-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.340591 kubelet[2916]: I1113 12:06:51.340276 2916 topology_manager.go:215] "Topology Admit Handler" podUID="ea61fb35b5e87cc9813253fc986f81df" podNamespace="kube-system" podName="kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.352435 kubelet[2916]: W1113 12:06:51.350946 2916 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 12:06:51.356212 kubelet[2916]: W1113 12:06:51.355405 2916 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 12:06:51.356352 kubelet[2916]: W1113 12:06:51.356328 2916 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 12:06:51.356460 kubelet[2916]: E1113 12:06:51.356429 2916 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-sx7g0.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.502961 kubelet[2916]: I1113 12:06:51.502855 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea61fb35b5e87cc9813253fc986f81df-usr-share-ca-certificates\") pod \"kube-apiserver-srv-sx7g0.gb1.brightbox.com\" (UID: \"ea61fb35b5e87cc9813253fc986f81df\") " pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.502961 kubelet[2916]: I1113 12:06:51.502951 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-flexvolume-dir\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.503423 kubelet[2916]: I1113 12:06:51.502994 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-k8s-certs\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " 
pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.503423 kubelet[2916]: I1113 12:06:51.503059 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea61fb35b5e87cc9813253fc986f81df-k8s-certs\") pod \"kube-apiserver-srv-sx7g0.gb1.brightbox.com\" (UID: \"ea61fb35b5e87cc9813253fc986f81df\") " pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.503423 kubelet[2916]: I1113 12:06:51.503095 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cacce1df9ae426b7529c517ea5b9868-kubeconfig\") pod \"kube-scheduler-srv-sx7g0.gb1.brightbox.com\" (UID: \"8cacce1df9ae426b7529c517ea5b9868\") " pod="kube-system/kube-scheduler-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.503423 kubelet[2916]: I1113 12:06:51.503130 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea61fb35b5e87cc9813253fc986f81df-ca-certs\") pod \"kube-apiserver-srv-sx7g0.gb1.brightbox.com\" (UID: \"ea61fb35b5e87cc9813253fc986f81df\") " pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.503423 kubelet[2916]: I1113 12:06:51.503174 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-ca-certs\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.504299 kubelet[2916]: I1113 12:06:51.503223 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-kubeconfig\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.504299 kubelet[2916]: I1113 12:06:51.503260 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92b63193d3b494a544bdf6cee873d9f3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-sx7g0.gb1.brightbox.com\" (UID: \"92b63193d3b494a544bdf6cee873d9f3\") " pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:51.952034 kubelet[2916]: I1113 12:06:51.950063 2916 apiserver.go:52] "Watching apiserver" Nov 13 12:06:52.002818 kubelet[2916]: I1113 12:06:52.002692 2916 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 13 12:06:52.110313 kubelet[2916]: W1113 12:06:52.110248 2916 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 12:06:52.110572 kubelet[2916]: E1113 12:06:52.110344 2916 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-sx7g0.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" Nov 13 12:06:52.247962 kubelet[2916]: I1113 12:06:52.247500 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-srv-sx7g0.gb1.brightbox.com" podStartSLOduration=1.247407116 podStartE2EDuration="1.247407116s" podCreationTimestamp="2024-11-13 12:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 12:06:52.226331659 +0000 UTC m=+1.470512977" watchObservedRunningTime="2024-11-13 12:06:52.247407116 +0000 UTC m=+1.491588423" Nov 13 12:06:52.276978 kubelet[2916]: I1113 12:06:52.276668 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-sx7g0.gb1.brightbox.com" podStartSLOduration=1.276604888 podStartE2EDuration="1.276604888s" podCreationTimestamp="2024-11-13 12:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 12:06:52.250390369 +0000 UTC m=+1.494571682" watchObservedRunningTime="2024-11-13 12:06:52.276604888 +0000 UTC m=+1.520786207" Nov 13 12:06:52.326560 kubelet[2916]: I1113 12:06:52.326480 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-sx7g0.gb1.brightbox.com" podStartSLOduration=2.326403345 podStartE2EDuration="2.326403345s" podCreationTimestamp="2024-11-13 12:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 12:06:52.278539712 +0000 UTC m=+1.522721030" watchObservedRunningTime="2024-11-13 12:06:52.326403345 +0000 UTC m=+1.570584647" Nov 13 12:06:56.820813 sudo[1930]: pam_unix(sudo:session): session closed for user root Nov 13 12:06:56.968279 sshd[1926]: pam_unix(sshd:session): session closed for user core Nov 13 12:06:56.974919 systemd[1]: sshd@8-10.230.32.222:22-147.75.109.163:37100.service: Deactivated successfully. Nov 13 12:06:56.981390 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit. Nov 13 12:06:56.982350 systemd[1]: session-11.scope: Deactivated successfully. Nov 13 12:06:56.985457 systemd-logind[1597]: Removed session 11. Nov 13 12:07:03.987608 kubelet[2916]: I1113 12:07:03.987212 2916 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 13 12:07:03.988980 containerd[1621]: time="2024-11-13T12:07:03.988268616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 13 12:07:03.990431 kubelet[2916]: I1113 12:07:03.989289 2916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 13 12:07:04.489229 kubelet[2916]: I1113 12:07:04.487515 2916 topology_manager.go:215] "Topology Admit Handler" podUID="9b9edc2a-b45e-4953-87dd-8dc8362293cd" podNamespace="kube-system" podName="kube-proxy-c4n5b" Nov 13 12:07:04.593378 kubelet[2916]: I1113 12:07:04.593308 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b9edc2a-b45e-4953-87dd-8dc8362293cd-xtables-lock\") pod \"kube-proxy-c4n5b\" (UID: \"9b9edc2a-b45e-4953-87dd-8dc8362293cd\") " pod="kube-system/kube-proxy-c4n5b" Nov 13 12:07:04.593971 kubelet[2916]: I1113 12:07:04.593947 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b9edc2a-b45e-4953-87dd-8dc8362293cd-lib-modules\") pod \"kube-proxy-c4n5b\" (UID: \"9b9edc2a-b45e-4953-87dd-8dc8362293cd\") " pod="kube-system/kube-proxy-c4n5b" Nov 13 12:07:04.594191 kubelet[2916]: I1113 12:07:04.594083 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b9edc2a-b45e-4953-87dd-8dc8362293cd-kube-proxy\") pod \"kube-proxy-c4n5b\" (UID: \"9b9edc2a-b45e-4953-87dd-8dc8362293cd\") " pod="kube-system/kube-proxy-c4n5b" Nov 13 12:07:04.594400 kubelet[2916]: I1113 12:07:04.594315 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsls4\" (UniqueName: \"kubernetes.io/projected/9b9edc2a-b45e-4953-87dd-8dc8362293cd-kube-api-access-dsls4\") pod \"kube-proxy-c4n5b\" (UID: \"9b9edc2a-b45e-4953-87dd-8dc8362293cd\") " pod="kube-system/kube-proxy-c4n5b" Nov 13 12:07:04.802514 containerd[1621]: time="2024-11-13T12:07:04.802341113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4n5b,Uid:9b9edc2a-b45e-4953-87dd-8dc8362293cd,Namespace:kube-system,Attempt:0,}" Nov 13 12:07:04.856195 containerd[1621]: time="2024-11-13T12:07:04.855949275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:04.856195 containerd[1621]: time="2024-11-13T12:07:04.856131087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:04.856195 containerd[1621]: time="2024-11-13T12:07:04.856159029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:04.856953 containerd[1621]: time="2024-11-13T12:07:04.856442821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:04.899341 systemd[1]: run-containerd-runc-k8s.io-18b18456c3c8c0d8a0af70ede74b22fac94928bf02268b9296e048e68f3977c0-runc.9rol1B.mount: Deactivated successfully. 
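At the start of this block the kubelet pushes the node's pod CIDR to the runtime (newPodCIDR="192.168.0.0/24") while containerd reports that no CNI config is present yet; the calico-node pod created further down is what normally drops that config into place. A quick, purely illustrative standard-library Python sketch of what the /24 allocation provides:

    import ipaddress

    # Per-node pod CIDR reported in the kubelet_network entry above.
    pod_cidr = ipaddress.ip_network("192.168.0.0/24")

    print(pod_cidr.num_addresses)       # 256 addresses in the /24
    print(list(pod_cidr.hosts())[:3])   # first host addresses in the range: .1, .2, .3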
Nov 13 12:07:05.046959 containerd[1621]: time="2024-11-13T12:07:05.046871144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4n5b,Uid:9b9edc2a-b45e-4953-87dd-8dc8362293cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"18b18456c3c8c0d8a0af70ede74b22fac94928bf02268b9296e048e68f3977c0\"" Nov 13 12:07:05.058694 kubelet[2916]: I1113 12:07:05.054936 2916 topology_manager.go:215] "Topology Admit Handler" podUID="64c0202b-0159-46f1-a768-6cebd27f2245" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-29f65" Nov 13 12:07:05.061256 containerd[1621]: time="2024-11-13T12:07:05.060597483Z" level=info msg="CreateContainer within sandbox \"18b18456c3c8c0d8a0af70ede74b22fac94928bf02268b9296e048e68f3977c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 13 12:07:05.096824 containerd[1621]: time="2024-11-13T12:07:05.096749842Z" level=info msg="CreateContainer within sandbox \"18b18456c3c8c0d8a0af70ede74b22fac94928bf02268b9296e048e68f3977c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f83bd269cc549fc13195e7aba10095f444109de567d7ad4f5ff2862a9348bd80\"" Nov 13 12:07:05.099606 containerd[1621]: time="2024-11-13T12:07:05.098378134Z" level=info msg="StartContainer for \"f83bd269cc549fc13195e7aba10095f444109de567d7ad4f5ff2862a9348bd80\"" Nov 13 12:07:05.102394 kubelet[2916]: I1113 12:07:05.102294 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64c0202b-0159-46f1-a768-6cebd27f2245-var-lib-calico\") pod \"tigera-operator-56b74f76df-29f65\" (UID: \"64c0202b-0159-46f1-a768-6cebd27f2245\") " pod="tigera-operator/tigera-operator-56b74f76df-29f65" Nov 13 12:07:05.102394 kubelet[2916]: I1113 12:07:05.102358 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz8x7\" (UniqueName: \"kubernetes.io/projected/64c0202b-0159-46f1-a768-6cebd27f2245-kube-api-access-mz8x7\") pod \"tigera-operator-56b74f76df-29f65\" (UID: \"64c0202b-0159-46f1-a768-6cebd27f2245\") " pod="tigera-operator/tigera-operator-56b74f76df-29f65" Nov 13 12:07:05.184446 containerd[1621]: time="2024-11-13T12:07:05.184390137Z" level=info msg="StartContainer for \"f83bd269cc549fc13195e7aba10095f444109de567d7ad4f5ff2862a9348bd80\" returns successfully" Nov 13 12:07:05.375191 containerd[1621]: time="2024-11-13T12:07:05.375112063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-29f65,Uid:64c0202b-0159-46f1-a768-6cebd27f2245,Namespace:tigera-operator,Attempt:0,}" Nov 13 12:07:05.419742 containerd[1621]: time="2024-11-13T12:07:05.416988328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:05.419742 containerd[1621]: time="2024-11-13T12:07:05.419466595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:05.419742 containerd[1621]: time="2024-11-13T12:07:05.419489596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:05.420551 containerd[1621]: time="2024-11-13T12:07:05.419691034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:05.537367 containerd[1621]: time="2024-11-13T12:07:05.537271797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-29f65,Uid:64c0202b-0159-46f1-a768-6cebd27f2245,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"99a15ba6c8286b6e12652021cf8f91dd2692d9b18b1f10a0abb3865cfcd37fd7\"" Nov 13 12:07:05.563104 containerd[1621]: time="2024-11-13T12:07:05.562517215Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 13 12:07:09.542443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714854190.mount: Deactivated successfully. Nov 13 12:07:10.539408 containerd[1621]: time="2024-11-13T12:07:10.538980836Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:10.542172 containerd[1621]: time="2024-11-13T12:07:10.542085799Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763363" Nov 13 12:07:10.553282 containerd[1621]: time="2024-11-13T12:07:10.553216255Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:10.559580 containerd[1621]: time="2024-11-13T12:07:10.559493265Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:10.560947 containerd[1621]: time="2024-11-13T12:07:10.560752496Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 4.998169635s" Nov 13 12:07:10.560947 containerd[1621]: time="2024-11-13T12:07:10.560830286Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 13 12:07:10.573071 containerd[1621]: time="2024-11-13T12:07:10.572853137Z" level=info msg="CreateContainer within sandbox \"99a15ba6c8286b6e12652021cf8f91dd2692d9b18b1f10a0abb3865cfcd37fd7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 13 12:07:10.591493 containerd[1621]: time="2024-11-13T12:07:10.591403551Z" level=info msg="CreateContainer within sandbox \"99a15ba6c8286b6e12652021cf8f91dd2692d9b18b1f10a0abb3865cfcd37fd7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bbdd72411451a0be48c307c787f3c615cebfdecb7dc53d6eaddfb7ef181b3a2a\"" Nov 13 12:07:10.593241 containerd[1621]: time="2024-11-13T12:07:10.592973981Z" level=info msg="StartContainer for \"bbdd72411451a0be48c307c787f3c615cebfdecb7dc53d6eaddfb7ef181b3a2a\"" Nov 13 12:07:10.678867 containerd[1621]: time="2024-11-13T12:07:10.678666618Z" level=info msg="StartContainer for \"bbdd72411451a0be48c307c787f3c615cebfdecb7dc53d6eaddfb7ef181b3a2a\" returns successfully" Nov 13 12:07:11.176634 kubelet[2916]: I1113 12:07:11.174755 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c4n5b" podStartSLOduration=7.174665717 podStartE2EDuration="7.174665717s" podCreationTimestamp="2024-11-13 12:07:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 12:07:06.157407105 +0000 UTC m=+15.401588409" watchObservedRunningTime="2024-11-13 12:07:11.174665717 +0000 UTC m=+20.418847019" Nov 13 12:07:14.051640 kubelet[2916]: I1113 12:07:14.051158 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-29f65" podStartSLOduration=5.031787707 podStartE2EDuration="10.050787785s" podCreationTimestamp="2024-11-13 12:07:04 +0000 UTC" firstStartedPulling="2024-11-13 12:07:05.542329363 +0000 UTC m=+14.786510657" lastFinishedPulling="2024-11-13 12:07:10.561329442 +0000 UTC m=+19.805510735" observedRunningTime="2024-11-13 12:07:11.177347375 +0000 UTC m=+20.421528689" watchObservedRunningTime="2024-11-13 12:07:14.050787785 +0000 UTC m=+23.294969092" Nov 13 12:07:14.056503 kubelet[2916]: I1113 12:07:14.055533 2916 topology_manager.go:215] "Topology Admit Handler" podUID="05cb7685-1113-4fe1-981b-18ed5b06cc3a" podNamespace="calico-system" podName="calico-typha-78c87f8d4b-zcdlg" Nov 13 12:07:14.070023 kubelet[2916]: I1113 12:07:14.068778 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdk4\" (UniqueName: \"kubernetes.io/projected/05cb7685-1113-4fe1-981b-18ed5b06cc3a-kube-api-access-xrdk4\") pod \"calico-typha-78c87f8d4b-zcdlg\" (UID: \"05cb7685-1113-4fe1-981b-18ed5b06cc3a\") " pod="calico-system/calico-typha-78c87f8d4b-zcdlg" Nov 13 12:07:14.070023 kubelet[2916]: I1113 12:07:14.069364 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05cb7685-1113-4fe1-981b-18ed5b06cc3a-typha-certs\") pod \"calico-typha-78c87f8d4b-zcdlg\" (UID: \"05cb7685-1113-4fe1-981b-18ed5b06cc3a\") " pod="calico-system/calico-typha-78c87f8d4b-zcdlg" Nov 13 12:07:14.071023 kubelet[2916]: I1113 12:07:14.070300 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cb7685-1113-4fe1-981b-18ed5b06cc3a-tigera-ca-bundle\") pod \"calico-typha-78c87f8d4b-zcdlg\" (UID: \"05cb7685-1113-4fe1-981b-18ed5b06cc3a\") " pod="calico-system/calico-typha-78c87f8d4b-zcdlg" Nov 13 12:07:14.234813 kubelet[2916]: I1113 12:07:14.234754 2916 topology_manager.go:215] "Topology Admit Handler" podUID="9262c9e2-cafa-40dd-ac8e-d534a6cd2404" podNamespace="calico-system" podName="calico-node-tfcws" Nov 13 12:07:14.271379 kubelet[2916]: I1113 12:07:14.271332 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-policysync\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271634 kubelet[2916]: I1113 12:07:14.271401 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-var-run-calico\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271634 kubelet[2916]: I1113 12:07:14.271444 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-var-lib-calico\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271634 kubelet[2916]: I1113 12:07:14.271492 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-flexvol-driver-host\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271634 kubelet[2916]: I1113 12:07:14.271537 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-lib-modules\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271634 kubelet[2916]: I1113 12:07:14.271574 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-cni-net-dir\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271918 kubelet[2916]: I1113 12:07:14.271619 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-node-certs\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271918 kubelet[2916]: I1113 12:07:14.271658 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-cni-bin-dir\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271918 kubelet[2916]: I1113 12:07:14.271718 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-tigera-ca-bundle\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271918 kubelet[2916]: I1113 12:07:14.271758 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptb2f\" (UniqueName: \"kubernetes.io/projected/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-kube-api-access-ptb2f\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.271918 kubelet[2916]: I1113 12:07:14.271797 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-cni-log-dir\") pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.274325 kubelet[2916]: I1113 12:07:14.271832 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9262c9e2-cafa-40dd-ac8e-d534a6cd2404-xtables-lock\") 
pod \"calico-node-tfcws\" (UID: \"9262c9e2-cafa-40dd-ac8e-d534a6cd2404\") " pod="calico-system/calico-node-tfcws" Nov 13 12:07:14.369117 kubelet[2916]: I1113 12:07:14.368210 2916 topology_manager.go:215] "Topology Admit Handler" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" podNamespace="calico-system" podName="csi-node-driver-shdhv" Nov 13 12:07:14.371707 kubelet[2916]: E1113 12:07:14.371671 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:14.372653 containerd[1621]: time="2024-11-13T12:07:14.371831958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c87f8d4b-zcdlg,Uid:05cb7685-1113-4fe1-981b-18ed5b06cc3a,Namespace:calico-system,Attempt:0,}" Nov 13 12:07:14.420514 kubelet[2916]: E1113 12:07:14.420476 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.421041 kubelet[2916]: W1113 12:07:14.420795 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.421618 kubelet[2916]: E1113 12:07:14.421518 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.434386 kubelet[2916]: E1113 12:07:14.434275 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.434386 kubelet[2916]: W1113 12:07:14.434300 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.436848 kubelet[2916]: E1113 12:07:14.434324 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.467888 kubelet[2916]: E1113 12:07:14.467319 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.467888 kubelet[2916]: W1113 12:07:14.467496 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.467888 kubelet[2916]: E1113 12:07:14.467641 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.471877 kubelet[2916]: E1113 12:07:14.470519 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.471877 kubelet[2916]: W1113 12:07:14.470539 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.471877 kubelet[2916]: E1113 12:07:14.470667 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.472932 kubelet[2916]: E1113 12:07:14.472843 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.473373 kubelet[2916]: W1113 12:07:14.473024 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.473373 kubelet[2916]: E1113 12:07:14.473219 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.474656 kubelet[2916]: E1113 12:07:14.474570 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.474656 kubelet[2916]: W1113 12:07:14.474589 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.475342 kubelet[2916]: E1113 12:07:14.474878 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.478111 kubelet[2916]: E1113 12:07:14.477978 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.478111 kubelet[2916]: W1113 12:07:14.477998 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.478111 kubelet[2916]: E1113 12:07:14.478062 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.479995 kubelet[2916]: E1113 12:07:14.478812 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.479995 kubelet[2916]: W1113 12:07:14.479724 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.479995 kubelet[2916]: E1113 12:07:14.479753 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.481370 kubelet[2916]: E1113 12:07:14.480829 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.481370 kubelet[2916]: W1113 12:07:14.480858 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.481370 kubelet[2916]: E1113 12:07:14.480882 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.482027 kubelet[2916]: E1113 12:07:14.481747 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.482027 kubelet[2916]: W1113 12:07:14.481765 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.482027 kubelet[2916]: E1113 12:07:14.481784 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.483093 kubelet[2916]: E1113 12:07:14.482651 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.483093 kubelet[2916]: W1113 12:07:14.482680 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.483093 kubelet[2916]: E1113 12:07:14.482732 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.483877 kubelet[2916]: E1113 12:07:14.483526 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.483877 kubelet[2916]: W1113 12:07:14.483562 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.483877 kubelet[2916]: E1113 12:07:14.483604 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.484806 kubelet[2916]: E1113 12:07:14.484521 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.484806 kubelet[2916]: W1113 12:07:14.484538 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.484806 kubelet[2916]: E1113 12:07:14.484557 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.485727 containerd[1621]: time="2024-11-13T12:07:14.485284398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:14.485727 containerd[1621]: time="2024-11-13T12:07:14.485373637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:14.485727 containerd[1621]: time="2024-11-13T12:07:14.485392104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:14.485727 containerd[1621]: time="2024-11-13T12:07:14.485533302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:14.486301 kubelet[2916]: E1113 12:07:14.485505 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.486301 kubelet[2916]: W1113 12:07:14.485545 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.486301 kubelet[2916]: E1113 12:07:14.485567 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.487343 kubelet[2916]: E1113 12:07:14.486768 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.487343 kubelet[2916]: W1113 12:07:14.486787 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.487343 kubelet[2916]: E1113 12:07:14.486810 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.487343 kubelet[2916]: I1113 12:07:14.486852 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a886e3e-042b-40d2-b8c2-1a33730ec832-kubelet-dir\") pod \"csi-node-driver-shdhv\" (UID: \"8a886e3e-042b-40d2-b8c2-1a33730ec832\") " pod="calico-system/csi-node-driver-shdhv" Nov 13 12:07:14.487995 kubelet[2916]: E1113 12:07:14.487632 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.487995 kubelet[2916]: W1113 12:07:14.487655 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.487995 kubelet[2916]: E1113 12:07:14.487698 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.487995 kubelet[2916]: I1113 12:07:14.487752 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8a886e3e-042b-40d2-b8c2-1a33730ec832-socket-dir\") pod \"csi-node-driver-shdhv\" (UID: \"8a886e3e-042b-40d2-b8c2-1a33730ec832\") " pod="calico-system/csi-node-driver-shdhv" Nov 13 12:07:14.488946 kubelet[2916]: E1113 12:07:14.488698 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.488946 kubelet[2916]: W1113 12:07:14.488719 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.488946 kubelet[2916]: E1113 12:07:14.488756 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.489515 kubelet[2916]: E1113 12:07:14.489496 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.489902 kubelet[2916]: W1113 12:07:14.489746 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.489902 kubelet[2916]: E1113 12:07:14.489774 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.489902 kubelet[2916]: I1113 12:07:14.489666 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8a886e3e-042b-40d2-b8c2-1a33730ec832-varrun\") pod \"csi-node-driver-shdhv\" (UID: \"8a886e3e-042b-40d2-b8c2-1a33730ec832\") " pod="calico-system/csi-node-driver-shdhv" Nov 13 12:07:14.490388 kubelet[2916]: E1113 12:07:14.490349 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.490388 kubelet[2916]: W1113 12:07:14.490366 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.490734 kubelet[2916]: E1113 12:07:14.490574 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.491885 kubelet[2916]: E1113 12:07:14.491866 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.492177 kubelet[2916]: W1113 12:07:14.491996 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.492346 kubelet[2916]: E1113 12:07:14.492327 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.492805 kubelet[2916]: E1113 12:07:14.492674 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.492805 kubelet[2916]: W1113 12:07:14.492716 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.493075 kubelet[2916]: E1113 12:07:14.492941 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.493398 kubelet[2916]: E1113 12:07:14.493305 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.493398 kubelet[2916]: W1113 12:07:14.493322 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.493730 kubelet[2916]: E1113 12:07:14.493597 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.493974 kubelet[2916]: E1113 12:07:14.493858 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.493974 kubelet[2916]: W1113 12:07:14.493875 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.494206 kubelet[2916]: E1113 12:07:14.494070 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.494830 kubelet[2916]: E1113 12:07:14.494500 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.494830 kubelet[2916]: W1113 12:07:14.494516 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.494830 kubelet[2916]: E1113 12:07:14.494540 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.495921 kubelet[2916]: E1113 12:07:14.495337 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.495921 kubelet[2916]: W1113 12:07:14.495353 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.495921 kubelet[2916]: E1113 12:07:14.495517 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.496811 kubelet[2916]: E1113 12:07:14.496535 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.496811 kubelet[2916]: W1113 12:07:14.496565 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.496811 kubelet[2916]: E1113 12:07:14.496635 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.498738 kubelet[2916]: E1113 12:07:14.498716 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.500519 kubelet[2916]: W1113 12:07:14.499863 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.500519 kubelet[2916]: E1113 12:07:14.499986 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.506063 kubelet[2916]: E1113 12:07:14.503965 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.506063 kubelet[2916]: W1113 12:07:14.503986 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.506063 kubelet[2916]: E1113 12:07:14.504046 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.506063 kubelet[2916]: E1113 12:07:14.504656 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.506063 kubelet[2916]: W1113 12:07:14.504670 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.506063 kubelet[2916]: E1113 12:07:14.504756 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.506063 kubelet[2916]: E1113 12:07:14.505286 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.506063 kubelet[2916]: W1113 12:07:14.505300 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.506063 kubelet[2916]: E1113 12:07:14.505319 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.507445 kubelet[2916]: E1113 12:07:14.506623 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.507445 kubelet[2916]: W1113 12:07:14.506636 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.507445 kubelet[2916]: E1113 12:07:14.506656 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.552323 containerd[1621]: time="2024-11-13T12:07:14.551029950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfcws,Uid:9262c9e2-cafa-40dd-ac8e-d534a6cd2404,Namespace:calico-system,Attempt:0,}" Nov 13 12:07:14.591639 kubelet[2916]: E1113 12:07:14.591581 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.592461 kubelet[2916]: W1113 12:07:14.592211 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.592461 kubelet[2916]: E1113 12:07:14.592285 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.596946 kubelet[2916]: E1113 12:07:14.595342 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.596946 kubelet[2916]: W1113 12:07:14.595361 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.596946 kubelet[2916]: E1113 12:07:14.595858 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.596946 kubelet[2916]: W1113 12:07:14.595873 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.596946 kubelet[2916]: E1113 12:07:14.595902 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.596946 kubelet[2916]: I1113 12:07:14.595945 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdqbb\" (UniqueName: \"kubernetes.io/projected/8a886e3e-042b-40d2-b8c2-1a33730ec832-kube-api-access-wdqbb\") pod \"csi-node-driver-shdhv\" (UID: \"8a886e3e-042b-40d2-b8c2-1a33730ec832\") " pod="calico-system/csi-node-driver-shdhv" Nov 13 12:07:14.596946 kubelet[2916]: E1113 12:07:14.595405 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.596946 kubelet[2916]: E1113 12:07:14.596542 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.596946 kubelet[2916]: W1113 12:07:14.596558 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.598044 kubelet[2916]: E1113 12:07:14.596710 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.598044 kubelet[2916]: I1113 12:07:14.596744 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8a886e3e-042b-40d2-b8c2-1a33730ec832-registration-dir\") pod \"csi-node-driver-shdhv\" (UID: \"8a886e3e-042b-40d2-b8c2-1a33730ec832\") " pod="calico-system/csi-node-driver-shdhv" Nov 13 12:07:14.598281 kubelet[2916]: E1113 12:07:14.598254 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.598404 kubelet[2916]: W1113 12:07:14.598386 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.598582 kubelet[2916]: E1113 12:07:14.598492 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.599521 kubelet[2916]: E1113 12:07:14.599277 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.599521 kubelet[2916]: W1113 12:07:14.599294 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.599820 kubelet[2916]: E1113 12:07:14.599655 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.600848 kubelet[2916]: E1113 12:07:14.600469 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.600848 kubelet[2916]: W1113 12:07:14.600490 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.601289 kubelet[2916]: E1113 12:07:14.601064 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.602060 kubelet[2916]: E1113 12:07:14.601744 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.602060 kubelet[2916]: W1113 12:07:14.601762 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.602060 kubelet[2916]: E1113 12:07:14.601952 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.602997 kubelet[2916]: E1113 12:07:14.602715 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.602997 kubelet[2916]: W1113 12:07:14.602737 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.603713 kubelet[2916]: E1113 12:07:14.603205 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.604284 kubelet[2916]: E1113 12:07:14.604141 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.604284 kubelet[2916]: W1113 12:07:14.604198 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.604697 kubelet[2916]: E1113 12:07:14.604593 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.605432 kubelet[2916]: E1113 12:07:14.605235 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.605432 kubelet[2916]: W1113 12:07:14.605252 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.605786 kubelet[2916]: E1113 12:07:14.605639 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.606943 kubelet[2916]: E1113 12:07:14.606925 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.607281 kubelet[2916]: W1113 12:07:14.607142 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.607281 kubelet[2916]: E1113 12:07:14.607176 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.607834 kubelet[2916]: E1113 12:07:14.607728 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.607834 kubelet[2916]: W1113 12:07:14.607746 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.607834 kubelet[2916]: E1113 12:07:14.607782 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.608538 kubelet[2916]: E1113 12:07:14.608365 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.608538 kubelet[2916]: W1113 12:07:14.608385 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.608538 kubelet[2916]: E1113 12:07:14.608471 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.609870 kubelet[2916]: E1113 12:07:14.609850 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.610384 kubelet[2916]: W1113 12:07:14.610075 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.610384 kubelet[2916]: E1113 12:07:14.610241 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.611155 kubelet[2916]: E1113 12:07:14.611027 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.611155 kubelet[2916]: W1113 12:07:14.611090 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.611594 kubelet[2916]: E1113 12:07:14.611259 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.614299 kubelet[2916]: E1113 12:07:14.614252 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.614759 kubelet[2916]: W1113 12:07:14.614604 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.614759 kubelet[2916]: E1113 12:07:14.614639 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.615649 kubelet[2916]: E1113 12:07:14.615631 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.616024 kubelet[2916]: W1113 12:07:14.615843 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.616223 kubelet[2916]: E1113 12:07:14.616115 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.618193 kubelet[2916]: E1113 12:07:14.618061 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.618193 kubelet[2916]: W1113 12:07:14.618081 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.618193 kubelet[2916]: E1113 12:07:14.618124 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.620602 kubelet[2916]: E1113 12:07:14.619656 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.621623 kubelet[2916]: W1113 12:07:14.620807 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.621623 kubelet[2916]: E1113 12:07:14.620860 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.622812 kubelet[2916]: E1113 12:07:14.622714 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.622812 kubelet[2916]: W1113 12:07:14.622735 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.622812 kubelet[2916]: E1113 12:07:14.622757 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.662878 containerd[1621]: time="2024-11-13T12:07:14.662499353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:14.663633 containerd[1621]: time="2024-11-13T12:07:14.662835220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:14.663835 containerd[1621]: time="2024-11-13T12:07:14.663406066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:14.664952 containerd[1621]: time="2024-11-13T12:07:14.664311228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:14.712030 kubelet[2916]: E1113 12:07:14.711938 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.712730 kubelet[2916]: W1113 12:07:14.711977 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.712730 kubelet[2916]: E1113 12:07:14.712389 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.714016 kubelet[2916]: E1113 12:07:14.713554 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.714016 kubelet[2916]: W1113 12:07:14.713570 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.714016 kubelet[2916]: E1113 12:07:14.713607 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.716078 kubelet[2916]: E1113 12:07:14.715941 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.716078 kubelet[2916]: W1113 12:07:14.715988 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.716078 kubelet[2916]: E1113 12:07:14.716053 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.716785 kubelet[2916]: E1113 12:07:14.716596 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.716785 kubelet[2916]: W1113 12:07:14.716630 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.716785 kubelet[2916]: E1113 12:07:14.716664 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.717387 kubelet[2916]: E1113 12:07:14.717213 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.717387 kubelet[2916]: W1113 12:07:14.717231 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.717387 kubelet[2916]: E1113 12:07:14.717279 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.718139 kubelet[2916]: E1113 12:07:14.717846 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.718139 kubelet[2916]: W1113 12:07:14.717867 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.718139 kubelet[2916]: E1113 12:07:14.717914 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.718773 kubelet[2916]: E1113 12:07:14.718316 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.718773 kubelet[2916]: W1113 12:07:14.718329 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.718773 kubelet[2916]: E1113 12:07:14.718354 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.719556 kubelet[2916]: E1113 12:07:14.719331 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.719556 kubelet[2916]: W1113 12:07:14.719364 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.719556 kubelet[2916]: E1113 12:07:14.719456 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.720422 kubelet[2916]: E1113 12:07:14.720278 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.720796 kubelet[2916]: W1113 12:07:14.720296 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.720796 kubelet[2916]: E1113 12:07:14.720546 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.721572 kubelet[2916]: E1113 12:07:14.721497 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.721572 kubelet[2916]: W1113 12:07:14.721514 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.721572 kubelet[2916]: E1113 12:07:14.721533 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:14.738227 kubelet[2916]: E1113 12:07:14.738191 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:14.739066 kubelet[2916]: W1113 12:07:14.738631 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:14.739066 kubelet[2916]: E1113 12:07:14.738701 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:14.792471 containerd[1621]: time="2024-11-13T12:07:14.792286449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c87f8d4b-zcdlg,Uid:05cb7685-1113-4fe1-981b-18ed5b06cc3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e8c1c7cc21be6c3460cde981080e11e9dfcd1b1e7b8a9b9a193940f6fc4f1b5\"" Nov 13 12:07:14.797866 containerd[1621]: time="2024-11-13T12:07:14.797709015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 13 12:07:14.811304 containerd[1621]: time="2024-11-13T12:07:14.811134534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tfcws,Uid:9262c9e2-cafa-40dd-ac8e-d534a6cd2404,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc815bb38d05cfd95459bdcd3b5bb366b7110407bf27b2ccf5e22f632dc3f999\"" Nov 13 12:07:16.039405 kubelet[2916]: E1113 12:07:16.038848 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:18.039385 kubelet[2916]: E1113 12:07:18.039149 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:18.408124 containerd[1621]: time="2024-11-13T12:07:18.407636422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:18.409900 containerd[1621]: time="2024-11-13T12:07:18.409570831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 13 12:07:18.410859 containerd[1621]: time="2024-11-13T12:07:18.410815665Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:18.414402 containerd[1621]: time="2024-11-13T12:07:18.414356020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:18.415926 containerd[1621]: time="2024-11-13T12:07:18.415869853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 3.618055317s" Nov 13 12:07:18.415926 containerd[1621]: time="2024-11-13T12:07:18.415919999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 13 12:07:18.420863 containerd[1621]: time="2024-11-13T12:07:18.419946904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 13 12:07:18.447875 containerd[1621]: time="2024-11-13T12:07:18.447823980Z" level=info msg="CreateContainer within sandbox \"1e8c1c7cc21be6c3460cde981080e11e9dfcd1b1e7b8a9b9a193940f6fc4f1b5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 13 12:07:18.492361 containerd[1621]: time="2024-11-13T12:07:18.492171096Z" level=info msg="CreateContainer within sandbox \"1e8c1c7cc21be6c3460cde981080e11e9dfcd1b1e7b8a9b9a193940f6fc4f1b5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e848aee783e6f63c36ef8c5b1c1af6aadd3afd66ae57c282cf17aa124fb782a8\"" Nov 13 12:07:18.498024 containerd[1621]: time="2024-11-13T12:07:18.495255172Z" level=info msg="StartContainer for \"e848aee783e6f63c36ef8c5b1c1af6aadd3afd66ae57c282cf17aa124fb782a8\"" Nov 13 12:07:18.704802 containerd[1621]: time="2024-11-13T12:07:18.704619073Z" level=info msg="StartContainer for \"e848aee783e6f63c36ef8c5b1c1af6aadd3afd66ae57c282cf17aa124fb782a8\" returns successfully" Nov 13 12:07:19.245078 kubelet[2916]: E1113 12:07:19.244985 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.245078 kubelet[2916]: W1113 12:07:19.245066 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.246688 kubelet[2916]: E1113 12:07:19.245142 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.246688 kubelet[2916]: E1113 12:07:19.245600 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.246688 kubelet[2916]: W1113 12:07:19.245617 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.246688 kubelet[2916]: E1113 12:07:19.245640 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.246688 kubelet[2916]: E1113 12:07:19.246050 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.246688 kubelet[2916]: W1113 12:07:19.246101 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.246688 kubelet[2916]: E1113 12:07:19.246125 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:19.246688 kubelet[2916]: E1113 12:07:19.246500 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.246688 kubelet[2916]: W1113 12:07:19.246532 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.246688 kubelet[2916]: E1113 12:07:19.246552 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.248739 kubelet[2916]: E1113 12:07:19.246914 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.248739 kubelet[2916]: W1113 12:07:19.246928 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.248739 kubelet[2916]: E1113 12:07:19.246945 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.248739 kubelet[2916]: E1113 12:07:19.247337 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.248739 kubelet[2916]: W1113 12:07:19.247380 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.248739 kubelet[2916]: E1113 12:07:19.247401 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.248739 kubelet[2916]: E1113 12:07:19.247854 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.248739 kubelet[2916]: W1113 12:07:19.247881 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.248739 kubelet[2916]: E1113 12:07:19.247901 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.248739 kubelet[2916]: E1113 12:07:19.248224 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.251221 kubelet[2916]: W1113 12:07:19.248238 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.251221 kubelet[2916]: E1113 12:07:19.248255 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:19.251221 kubelet[2916]: E1113 12:07:19.248594 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.251221 kubelet[2916]: W1113 12:07:19.248608 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.251221 kubelet[2916]: E1113 12:07:19.248625 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.251221 kubelet[2916]: E1113 12:07:19.248940 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.251221 kubelet[2916]: W1113 12:07:19.248953 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.251221 kubelet[2916]: E1113 12:07:19.248971 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.251221 kubelet[2916]: E1113 12:07:19.249324 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.251221 kubelet[2916]: W1113 12:07:19.249364 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.253324 kubelet[2916]: E1113 12:07:19.249383 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.253324 kubelet[2916]: E1113 12:07:19.249748 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.253324 kubelet[2916]: W1113 12:07:19.249786 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.253324 kubelet[2916]: E1113 12:07:19.249807 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.253324 kubelet[2916]: E1113 12:07:19.250421 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.253324 kubelet[2916]: W1113 12:07:19.250469 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.253324 kubelet[2916]: E1113 12:07:19.250494 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:19.253324 kubelet[2916]: E1113 12:07:19.250840 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.253324 kubelet[2916]: W1113 12:07:19.250872 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.253324 kubelet[2916]: E1113 12:07:19.250930 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.253793 kubelet[2916]: E1113 12:07:19.251270 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.253793 kubelet[2916]: W1113 12:07:19.251284 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.253793 kubelet[2916]: E1113 12:07:19.251314 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.351830 kubelet[2916]: E1113 12:07:19.351720 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.351830 kubelet[2916]: W1113 12:07:19.351752 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.351830 kubelet[2916]: E1113 12:07:19.351781 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.352638 kubelet[2916]: E1113 12:07:19.352601 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.352638 kubelet[2916]: W1113 12:07:19.352629 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.352777 kubelet[2916]: E1113 12:07:19.352659 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.352945 kubelet[2916]: E1113 12:07:19.352925 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.353074 kubelet[2916]: W1113 12:07:19.352945 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.353074 kubelet[2916]: E1113 12:07:19.352994 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:19.353299 kubelet[2916]: E1113 12:07:19.353280 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.353299 kubelet[2916]: W1113 12:07:19.353300 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.353437 kubelet[2916]: E1113 12:07:19.353325 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.353658 kubelet[2916]: E1113 12:07:19.353640 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.353658 kubelet[2916]: W1113 12:07:19.353658 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.353788 kubelet[2916]: E1113 12:07:19.353683 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.354019 kubelet[2916]: E1113 12:07:19.353981 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.354097 kubelet[2916]: W1113 12:07:19.354020 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.354097 kubelet[2916]: E1113 12:07:19.354074 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.354415 kubelet[2916]: E1113 12:07:19.354385 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.354415 kubelet[2916]: W1113 12:07:19.354416 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.354545 kubelet[2916]: E1113 12:07:19.354454 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.354762 kubelet[2916]: E1113 12:07:19.354744 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.354762 kubelet[2916]: W1113 12:07:19.354762 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.354879 kubelet[2916]: E1113 12:07:19.354856 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:19.355203 kubelet[2916]: E1113 12:07:19.355182 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.355203 kubelet[2916]: W1113 12:07:19.355202 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.355420 kubelet[2916]: E1113 12:07:19.355381 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.355606 kubelet[2916]: E1113 12:07:19.355561 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.355606 kubelet[2916]: W1113 12:07:19.355605 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.355719 kubelet[2916]: E1113 12:07:19.355701 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.355935 kubelet[2916]: E1113 12:07:19.355917 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.356035 kubelet[2916]: W1113 12:07:19.355937 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.356035 kubelet[2916]: E1113 12:07:19.355970 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.356828 kubelet[2916]: E1113 12:07:19.356796 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.356828 kubelet[2916]: W1113 12:07:19.356819 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.356949 kubelet[2916]: E1113 12:07:19.356845 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.357466 kubelet[2916]: E1113 12:07:19.357299 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.357466 kubelet[2916]: W1113 12:07:19.357317 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.357466 kubelet[2916]: E1113 12:07:19.357356 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:19.358082 kubelet[2916]: E1113 12:07:19.357864 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.358082 kubelet[2916]: W1113 12:07:19.357882 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.358082 kubelet[2916]: E1113 12:07:19.357912 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.358442 kubelet[2916]: E1113 12:07:19.358312 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.358442 kubelet[2916]: W1113 12:07:19.358329 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.358442 kubelet[2916]: E1113 12:07:19.358380 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.358993 kubelet[2916]: E1113 12:07:19.358821 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.358993 kubelet[2916]: W1113 12:07:19.358838 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.358993 kubelet[2916]: E1113 12:07:19.358890 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.359410 kubelet[2916]: E1113 12:07:19.359261 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.359410 kubelet[2916]: W1113 12:07:19.359279 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.359410 kubelet[2916]: E1113 12:07:19.359314 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:19.359672 kubelet[2916]: E1113 12:07:19.359653 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:19.359769 kubelet[2916]: W1113 12:07:19.359750 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:19.359878 kubelet[2916]: E1113 12:07:19.359861 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.039539 kubelet[2916]: E1113 12:07:20.039402 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:20.207670 kubelet[2916]: I1113 12:07:20.206553 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 12:07:20.262794 kubelet[2916]: E1113 12:07:20.262589 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.262794 kubelet[2916]: W1113 12:07:20.262624 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.262794 kubelet[2916]: E1113 12:07:20.262660 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.264696 kubelet[2916]: E1113 12:07:20.264519 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.264696 kubelet[2916]: W1113 12:07:20.264541 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.264696 kubelet[2916]: E1113 12:07:20.264577 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.265144 kubelet[2916]: E1113 12:07:20.264951 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.265144 kubelet[2916]: W1113 12:07:20.264969 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.265144 kubelet[2916]: E1113 12:07:20.264988 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.265682 kubelet[2916]: E1113 12:07:20.265361 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.265682 kubelet[2916]: W1113 12:07:20.265393 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.265682 kubelet[2916]: E1113 12:07:20.265416 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.266317 kubelet[2916]: E1113 12:07:20.265776 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.266317 kubelet[2916]: W1113 12:07:20.265789 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.266317 kubelet[2916]: E1113 12:07:20.265807 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.267052 kubelet[2916]: E1113 12:07:20.266763 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.267052 kubelet[2916]: W1113 12:07:20.266785 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.267052 kubelet[2916]: E1113 12:07:20.266804 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.267828 kubelet[2916]: E1113 12:07:20.267281 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.267828 kubelet[2916]: W1113 12:07:20.267294 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.267828 kubelet[2916]: E1113 12:07:20.267310 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.268935 kubelet[2916]: E1113 12:07:20.268856 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.268935 kubelet[2916]: W1113 12:07:20.268877 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.268935 kubelet[2916]: E1113 12:07:20.268897 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.270166 kubelet[2916]: E1113 12:07:20.269780 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.270166 kubelet[2916]: W1113 12:07:20.269801 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.270166 kubelet[2916]: E1113 12:07:20.269821 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.270166 kubelet[2916]: E1113 12:07:20.270106 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.270166 kubelet[2916]: W1113 12:07:20.270120 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.270166 kubelet[2916]: E1113 12:07:20.270138 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.271387 kubelet[2916]: E1113 12:07:20.271083 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.271387 kubelet[2916]: W1113 12:07:20.271104 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.271387 kubelet[2916]: E1113 12:07:20.271123 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.271920 kubelet[2916]: E1113 12:07:20.271426 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.271920 kubelet[2916]: W1113 12:07:20.271439 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.271920 kubelet[2916]: E1113 12:07:20.271456 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.272329 kubelet[2916]: E1113 12:07:20.272171 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.272329 kubelet[2916]: W1113 12:07:20.272195 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.272329 kubelet[2916]: E1113 12:07:20.272214 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.273145 kubelet[2916]: E1113 12:07:20.273120 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.273145 kubelet[2916]: W1113 12:07:20.273140 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.273565 kubelet[2916]: E1113 12:07:20.273159 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.273565 kubelet[2916]: E1113 12:07:20.273555 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.273829 kubelet[2916]: W1113 12:07:20.273570 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.273829 kubelet[2916]: E1113 12:07:20.273588 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.274646 kubelet[2916]: E1113 12:07:20.274528 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.274646 kubelet[2916]: W1113 12:07:20.274570 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.274646 kubelet[2916]: E1113 12:07:20.274593 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.275329 kubelet[2916]: E1113 12:07:20.275308 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.275439 kubelet[2916]: W1113 12:07:20.275402 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.275521 kubelet[2916]: E1113 12:07:20.275463 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.276488 kubelet[2916]: E1113 12:07:20.276281 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.276488 kubelet[2916]: W1113 12:07:20.276305 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.276488 kubelet[2916]: E1113 12:07:20.276355 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.277139 kubelet[2916]: E1113 12:07:20.276956 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.277139 kubelet[2916]: W1113 12:07:20.277048 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.277139 kubelet[2916]: E1113 12:07:20.277087 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.277761 kubelet[2916]: E1113 12:07:20.277453 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.277761 kubelet[2916]: W1113 12:07:20.277483 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.277761 kubelet[2916]: E1113 12:07:20.277555 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.278004 kubelet[2916]: E1113 12:07:20.277838 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.278004 kubelet[2916]: W1113 12:07:20.277851 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.278004 kubelet[2916]: E1113 12:07:20.277923 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.278341 kubelet[2916]: E1113 12:07:20.278218 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.278341 kubelet[2916]: W1113 12:07:20.278236 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.278341 kubelet[2916]: E1113 12:07:20.278268 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.278715 kubelet[2916]: E1113 12:07:20.278513 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.278715 kubelet[2916]: W1113 12:07:20.278526 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.278715 kubelet[2916]: E1113 12:07:20.278561 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.279203 kubelet[2916]: E1113 12:07:20.279174 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.279203 kubelet[2916]: W1113 12:07:20.279196 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.279631 kubelet[2916]: E1113 12:07:20.279283 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.279794 kubelet[2916]: E1113 12:07:20.279772 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.279794 kubelet[2916]: W1113 12:07:20.279792 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.280165 kubelet[2916]: E1113 12:07:20.280110 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.280446 kubelet[2916]: E1113 12:07:20.280427 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.280686 kubelet[2916]: W1113 12:07:20.280482 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.280686 kubelet[2916]: E1113 12:07:20.280526 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.281164 kubelet[2916]: E1113 12:07:20.281117 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.281164 kubelet[2916]: W1113 12:07:20.281136 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.281164 kubelet[2916]: E1113 12:07:20.281162 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.281794 kubelet[2916]: E1113 12:07:20.281429 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.281794 kubelet[2916]: W1113 12:07:20.281443 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.281794 kubelet[2916]: E1113 12:07:20.281468 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.282229 kubelet[2916]: E1113 12:07:20.282205 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.282229 kubelet[2916]: W1113 12:07:20.282224 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.282885 kubelet[2916]: E1113 12:07:20.282264 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.282885 kubelet[2916]: E1113 12:07:20.282564 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.282885 kubelet[2916]: W1113 12:07:20.282585 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.282885 kubelet[2916]: E1113 12:07:20.282688 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.283353 kubelet[2916]: E1113 12:07:20.283334 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.283353 kubelet[2916]: W1113 12:07:20.283352 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.283475 kubelet[2916]: E1113 12:07:20.283380 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.284063 kubelet[2916]: E1113 12:07:20.283907 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.284063 kubelet[2916]: W1113 12:07:20.283929 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.284063 kubelet[2916]: E1113 12:07:20.283959 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 12:07:20.285499 kubelet[2916]: E1113 12:07:20.285373 2916 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 12:07:20.285499 kubelet[2916]: W1113 12:07:20.285394 2916 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 12:07:20.285499 kubelet[2916]: E1113 12:07:20.285413 2916 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 12:07:20.508854 containerd[1621]: time="2024-11-13T12:07:20.508748396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:20.510511 containerd[1621]: time="2024-11-13T12:07:20.510350387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 13 12:07:20.511567 containerd[1621]: time="2024-11-13T12:07:20.511486595Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:20.516494 containerd[1621]: time="2024-11-13T12:07:20.516404621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:20.518978 containerd[1621]: time="2024-11-13T12:07:20.518917208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 2.098909093s" Nov 13 12:07:20.519099 containerd[1621]: time="2024-11-13T12:07:20.519033254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 13 12:07:20.521964 containerd[1621]: time="2024-11-13T12:07:20.521769333Z" level=info msg="CreateContainer within sandbox \"cc815bb38d05cfd95459bdcd3b5bb366b7110407bf27b2ccf5e22f632dc3f999\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 13 12:07:20.552789 containerd[1621]: time="2024-11-13T12:07:20.552686816Z" level=info msg="CreateContainer within sandbox \"cc815bb38d05cfd95459bdcd3b5bb366b7110407bf27b2ccf5e22f632dc3f999\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0f323a19a3935d183a7e17380de481ee4a279de7226e5e7dc3b27a341db76707\"" Nov 13 12:07:20.554379 containerd[1621]: time="2024-11-13T12:07:20.554329018Z" level=info msg="StartContainer for \"0f323a19a3935d183a7e17380de481ee4a279de7226e5e7dc3b27a341db76707\"" Nov 13 12:07:20.653568 containerd[1621]: time="2024-11-13T12:07:20.653472979Z" level=info msg="StartContainer for \"0f323a19a3935d183a7e17380de481ee4a279de7226e5e7dc3b27a341db76707\" returns successfully" Nov 13 12:07:20.713104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f323a19a3935d183a7e17380de481ee4a279de7226e5e7dc3b27a341db76707-rootfs.mount: Deactivated successfully. 
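[editor's note] The repeated kubelet errors above come from the FlexVolume dynamic plugin prober: it finds the nodeagent~uds directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the uds executable is not installed there yet, so the "init" call produces empty output and JSON unmarshalling fails. The flexvol-driver init container pulled and started here (pod2daemon-flexvol) is what populates that directory, after which the probe errors stop. As a minimal sketch of the call convention the prober expects (assuming the standard FlexVolume contract; the struct below is illustrative, not kubelet or Calico source), a driver's init call is expected to print JSON like this to stdout:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus is an illustrative stand-in for the result object a FlexVolume
// driver prints to stdout; the kubelet above fails because it reads an empty
// string instead of JSON like this.
type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	// Response of a driver that handles mounts itself and needs no attach step.
	out := driverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	}
	if err := json.NewEncoder(os.Stdout).Encode(out); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}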
Nov 13 12:07:20.771761 containerd[1621]: time="2024-11-13T12:07:20.736868370Z" level=info msg="shim disconnected" id=0f323a19a3935d183a7e17380de481ee4a279de7226e5e7dc3b27a341db76707 namespace=k8s.io Nov 13 12:07:20.771761 containerd[1621]: time="2024-11-13T12:07:20.771659149Z" level=warning msg="cleaning up after shim disconnected" id=0f323a19a3935d183a7e17380de481ee4a279de7226e5e7dc3b27a341db76707 namespace=k8s.io Nov 13 12:07:20.771761 containerd[1621]: time="2024-11-13T12:07:20.771693127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 12:07:21.214450 containerd[1621]: time="2024-11-13T12:07:21.214035631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 13 12:07:21.236277 kubelet[2916]: I1113 12:07:21.236210 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-78c87f8d4b-zcdlg" podStartSLOduration=3.615277177 podStartE2EDuration="7.23613599s" podCreationTimestamp="2024-11-13 12:07:14 +0000 UTC" firstStartedPulling="2024-11-13 12:07:14.796485258 +0000 UTC m=+24.040666550" lastFinishedPulling="2024-11-13 12:07:18.417344053 +0000 UTC m=+27.661525363" observedRunningTime="2024-11-13 12:07:19.224724631 +0000 UTC m=+28.468905940" watchObservedRunningTime="2024-11-13 12:07:21.23613599 +0000 UTC m=+30.480317289" Nov 13 12:07:22.038866 kubelet[2916]: E1113 12:07:22.038760 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:24.040056 kubelet[2916]: E1113 12:07:24.039989 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:26.039138 kubelet[2916]: E1113 12:07:26.039065 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:27.143809 containerd[1621]: time="2024-11-13T12:07:27.143739641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:27.145564 containerd[1621]: time="2024-11-13T12:07:27.145498187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 13 12:07:27.145964 containerd[1621]: time="2024-11-13T12:07:27.145931697Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:27.152229 containerd[1621]: time="2024-11-13T12:07:27.152190551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:27.156215 containerd[1621]: time="2024-11-13T12:07:27.156179148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" 
with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 5.942062571s" Nov 13 12:07:27.156668 containerd[1621]: time="2024-11-13T12:07:27.156369769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 13 12:07:27.159517 containerd[1621]: time="2024-11-13T12:07:27.159479902Z" level=info msg="CreateContainer within sandbox \"cc815bb38d05cfd95459bdcd3b5bb366b7110407bf27b2ccf5e22f632dc3f999\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 13 12:07:27.197460 containerd[1621]: time="2024-11-13T12:07:27.197288604Z" level=info msg="CreateContainer within sandbox \"cc815bb38d05cfd95459bdcd3b5bb366b7110407bf27b2ccf5e22f632dc3f999\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"caf34a21522894d9f0ef9ff92b045e0e4893275bfa6e953bf1ac1a89c89321e2\"" Nov 13 12:07:27.198147 containerd[1621]: time="2024-11-13T12:07:27.198092193Z" level=info msg="StartContainer for \"caf34a21522894d9f0ef9ff92b045e0e4893275bfa6e953bf1ac1a89c89321e2\"" Nov 13 12:07:27.314031 containerd[1621]: time="2024-11-13T12:07:27.313804572Z" level=info msg="StartContainer for \"caf34a21522894d9f0ef9ff92b045e0e4893275bfa6e953bf1ac1a89c89321e2\" returns successfully" Nov 13 12:07:28.038445 kubelet[2916]: E1113 12:07:28.038389 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:28.414436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caf34a21522894d9f0ef9ff92b045e0e4893275bfa6e953bf1ac1a89c89321e2-rootfs.mount: Deactivated successfully. 
Nov 13 12:07:28.418059 containerd[1621]: time="2024-11-13T12:07:28.417453359Z" level=info msg="shim disconnected" id=caf34a21522894d9f0ef9ff92b045e0e4893275bfa6e953bf1ac1a89c89321e2 namespace=k8s.io Nov 13 12:07:28.418059 containerd[1621]: time="2024-11-13T12:07:28.417543008Z" level=warning msg="cleaning up after shim disconnected" id=caf34a21522894d9f0ef9ff92b045e0e4893275bfa6e953bf1ac1a89c89321e2 namespace=k8s.io Nov 13 12:07:28.418059 containerd[1621]: time="2024-11-13T12:07:28.417568545Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 12:07:28.426022 kubelet[2916]: I1113 12:07:28.425656 2916 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 13 12:07:28.476220 kubelet[2916]: I1113 12:07:28.471222 2916 topology_manager.go:215] "Topology Admit Handler" podUID="b637d3e5-f4d0-4ad5-a801-bcd5490bb15d" podNamespace="kube-system" podName="coredns-76f75df574-kz8bv" Nov 13 12:07:28.476220 kubelet[2916]: I1113 12:07:28.475794 2916 topology_manager.go:215] "Topology Admit Handler" podUID="36b3a588-3839-4c9b-9a4b-8639fa9043ec" podNamespace="calico-system" podName="calico-kube-controllers-668f5df9fb-7xn7m" Nov 13 12:07:28.480055 kubelet[2916]: I1113 12:07:28.478633 2916 topology_manager.go:215] "Topology Admit Handler" podUID="56d5778e-34f5-4698-a9ed-8cc517424213" podNamespace="kube-system" podName="coredns-76f75df574-58wrm" Nov 13 12:07:28.480802 kubelet[2916]: I1113 12:07:28.480289 2916 topology_manager.go:215] "Topology Admit Handler" podUID="3531763d-3a3a-4c39-b14a-dc83c6ab3c8e" podNamespace="calico-apiserver" podName="calico-apiserver-57579cb858-kmp5s" Nov 13 12:07:28.494197 kubelet[2916]: I1113 12:07:28.493995 2916 topology_manager.go:215] "Topology Admit Handler" podUID="b5224871-082e-4b53-9a0e-2a5888841a73" podNamespace="calico-apiserver" podName="calico-apiserver-57579cb858-82wlr" Nov 13 12:07:28.638256 kubelet[2916]: I1113 12:07:28.638145 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw6l4\" (UniqueName: \"kubernetes.io/projected/56d5778e-34f5-4698-a9ed-8cc517424213-kube-api-access-qw6l4\") pod \"coredns-76f75df574-58wrm\" (UID: \"56d5778e-34f5-4698-a9ed-8cc517424213\") " pod="kube-system/coredns-76f75df574-58wrm" Nov 13 12:07:28.638256 kubelet[2916]: I1113 12:07:28.638241 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8z4q\" (UniqueName: \"kubernetes.io/projected/36b3a588-3839-4c9b-9a4b-8639fa9043ec-kube-api-access-w8z4q\") pod \"calico-kube-controllers-668f5df9fb-7xn7m\" (UID: \"36b3a588-3839-4c9b-9a4b-8639fa9043ec\") " pod="calico-system/calico-kube-controllers-668f5df9fb-7xn7m" Nov 13 12:07:28.638256 kubelet[2916]: I1113 12:07:28.638280 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3531763d-3a3a-4c39-b14a-dc83c6ab3c8e-calico-apiserver-certs\") pod \"calico-apiserver-57579cb858-kmp5s\" (UID: \"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e\") " pod="calico-apiserver/calico-apiserver-57579cb858-kmp5s" Nov 13 12:07:28.638704 kubelet[2916]: I1113 12:07:28.638322 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhph\" (UniqueName: \"kubernetes.io/projected/3531763d-3a3a-4c39-b14a-dc83c6ab3c8e-kube-api-access-kfhph\") pod \"calico-apiserver-57579cb858-kmp5s\" (UID: \"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e\") " 
pod="calico-apiserver/calico-apiserver-57579cb858-kmp5s" Nov 13 12:07:28.638704 kubelet[2916]: I1113 12:07:28.638356 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36b3a588-3839-4c9b-9a4b-8639fa9043ec-tigera-ca-bundle\") pod \"calico-kube-controllers-668f5df9fb-7xn7m\" (UID: \"36b3a588-3839-4c9b-9a4b-8639fa9043ec\") " pod="calico-system/calico-kube-controllers-668f5df9fb-7xn7m" Nov 13 12:07:28.638704 kubelet[2916]: I1113 12:07:28.638393 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b637d3e5-f4d0-4ad5-a801-bcd5490bb15d-config-volume\") pod \"coredns-76f75df574-kz8bv\" (UID: \"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d\") " pod="kube-system/coredns-76f75df574-kz8bv" Nov 13 12:07:28.638704 kubelet[2916]: I1113 12:07:28.638431 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56d5778e-34f5-4698-a9ed-8cc517424213-config-volume\") pod \"coredns-76f75df574-58wrm\" (UID: \"56d5778e-34f5-4698-a9ed-8cc517424213\") " pod="kube-system/coredns-76f75df574-58wrm" Nov 13 12:07:28.638704 kubelet[2916]: I1113 12:07:28.638468 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlzd\" (UniqueName: \"kubernetes.io/projected/b5224871-082e-4b53-9a0e-2a5888841a73-kube-api-access-rzlzd\") pod \"calico-apiserver-57579cb858-82wlr\" (UID: \"b5224871-082e-4b53-9a0e-2a5888841a73\") " pod="calico-apiserver/calico-apiserver-57579cb858-82wlr" Nov 13 12:07:28.639568 kubelet[2916]: I1113 12:07:28.638527 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvn6p\" (UniqueName: \"kubernetes.io/projected/b637d3e5-f4d0-4ad5-a801-bcd5490bb15d-kube-api-access-hvn6p\") pod \"coredns-76f75df574-kz8bv\" (UID: \"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d\") " pod="kube-system/coredns-76f75df574-kz8bv" Nov 13 12:07:28.639568 kubelet[2916]: I1113 12:07:28.638586 2916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b5224871-082e-4b53-9a0e-2a5888841a73-calico-apiserver-certs\") pod \"calico-apiserver-57579cb858-82wlr\" (UID: \"b5224871-082e-4b53-9a0e-2a5888841a73\") " pod="calico-apiserver/calico-apiserver-57579cb858-82wlr" Nov 13 12:07:28.802741 containerd[1621]: time="2024-11-13T12:07:28.802542073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kz8bv,Uid:b637d3e5-f4d0-4ad5-a801-bcd5490bb15d,Namespace:kube-system,Attempt:0,}" Nov 13 12:07:28.813031 containerd[1621]: time="2024-11-13T12:07:28.812861351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-58wrm,Uid:56d5778e-34f5-4698-a9ed-8cc517424213,Namespace:kube-system,Attempt:0,}" Nov 13 12:07:28.818600 containerd[1621]: time="2024-11-13T12:07:28.818456530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57579cb858-kmp5s,Uid:3531763d-3a3a-4c39-b14a-dc83c6ab3c8e,Namespace:calico-apiserver,Attempt:0,}" Nov 13 12:07:28.819619 containerd[1621]: time="2024-11-13T12:07:28.819433115Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57579cb858-82wlr,Uid:b5224871-082e-4b53-9a0e-2a5888841a73,Namespace:calico-apiserver,Attempt:0,}" Nov 13 12:07:28.821873 containerd[1621]: time="2024-11-13T12:07:28.821511163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668f5df9fb-7xn7m,Uid:36b3a588-3839-4c9b-9a4b-8639fa9043ec,Namespace:calico-system,Attempt:0,}" Nov 13 12:07:29.169570 containerd[1621]: time="2024-11-13T12:07:29.169473897Z" level=error msg="Failed to destroy network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.170837 containerd[1621]: time="2024-11-13T12:07:29.170792921Z" level=error msg="Failed to destroy network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.175249 containerd[1621]: time="2024-11-13T12:07:29.175210068Z" level=error msg="encountered an error cleaning up failed sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.178755 containerd[1621]: time="2024-11-13T12:07:29.177860914Z" level=error msg="Failed to destroy network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.178755 containerd[1621]: time="2024-11-13T12:07:29.178271993Z" level=error msg="encountered an error cleaning up failed sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.184402 containerd[1621]: time="2024-11-13T12:07:29.184147339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57579cb858-kmp5s,Uid:3531763d-3a3a-4c39-b14a-dc83c6ab3c8e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.192779 containerd[1621]: time="2024-11-13T12:07:29.192661888Z" level=error msg="Failed to destroy network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.193447 containerd[1621]: time="2024-11-13T12:07:29.193240424Z" level=error msg="encountered an error cleaning up failed sandbox 
\"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.193447 containerd[1621]: time="2024-11-13T12:07:29.193294662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57579cb858-82wlr,Uid:b5224871-082e-4b53-9a0e-2a5888841a73,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.193447 containerd[1621]: time="2024-11-13T12:07:29.193411228Z" level=error msg="Failed to destroy network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.194446 containerd[1621]: time="2024-11-13T12:07:29.193801635Z" level=error msg="encountered an error cleaning up failed sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.194446 containerd[1621]: time="2024-11-13T12:07:29.193845823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-58wrm,Uid:56d5778e-34f5-4698-a9ed-8cc517424213,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.194446 containerd[1621]: time="2024-11-13T12:07:29.193896119Z" level=error msg="encountered an error cleaning up failed sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.194446 containerd[1621]: time="2024-11-13T12:07:29.193940525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kz8bv,Uid:b637d3e5-f4d0-4ad5-a801-bcd5490bb15d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.194446 containerd[1621]: time="2024-11-13T12:07:29.194027975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668f5df9fb-7xn7m,Uid:36b3a588-3839-4c9b-9a4b-8639fa9043ec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.198849 kubelet[2916]: E1113 12:07:29.198690 2916 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.198849 kubelet[2916]: E1113 12:07:29.198754 2916 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.198849 kubelet[2916]: E1113 12:07:29.198802 2916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57579cb858-kmp5s" Nov 13 12:07:29.198849 kubelet[2916]: E1113 12:07:29.198844 2916 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57579cb858-kmp5s" Nov 13 12:07:29.199648 kubelet[2916]: E1113 12:07:29.198944 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57579cb858-kmp5s_calico-apiserver(3531763d-3a3a-4c39-b14a-dc83c6ab3c8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57579cb858-kmp5s_calico-apiserver(3531763d-3a3a-4c39-b14a-dc83c6ab3c8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57579cb858-kmp5s" podUID="3531763d-3a3a-4c39-b14a-dc83c6ab3c8e" Nov 13 12:07:29.199648 kubelet[2916]: E1113 12:07:29.198692 2916 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.199648 kubelet[2916]: E1113 12:07:29.199427 2916 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668f5df9fb-7xn7m" Nov 13 12:07:29.199821 kubelet[2916]: E1113 12:07:29.199458 2916 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668f5df9fb-7xn7m" Nov 13 12:07:29.199821 kubelet[2916]: E1113 12:07:29.199509 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-668f5df9fb-7xn7m_calico-system(36b3a588-3839-4c9b-9a4b-8639fa9043ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-668f5df9fb-7xn7m_calico-system(36b3a588-3839-4c9b-9a4b-8639fa9043ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-668f5df9fb-7xn7m" podUID="36b3a588-3839-4c9b-9a4b-8639fa9043ec" Nov 13 12:07:29.199821 kubelet[2916]: E1113 12:07:29.198803 2916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57579cb858-82wlr" Nov 13 12:07:29.200322 kubelet[2916]: E1113 12:07:29.199561 2916 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57579cb858-82wlr" Nov 13 12:07:29.200322 kubelet[2916]: E1113 12:07:29.199625 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57579cb858-82wlr_calico-apiserver(b5224871-082e-4b53-9a0e-2a5888841a73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57579cb858-82wlr_calico-apiserver(b5224871-082e-4b53-9a0e-2a5888841a73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-57579cb858-82wlr" podUID="b5224871-082e-4b53-9a0e-2a5888841a73" Nov 13 12:07:29.200322 kubelet[2916]: E1113 12:07:29.199685 2916 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.200811 kubelet[2916]: E1113 12:07:29.199728 2916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kz8bv" Nov 13 12:07:29.200811 kubelet[2916]: E1113 12:07:29.199758 2916 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kz8bv" Nov 13 12:07:29.200811 kubelet[2916]: E1113 12:07:29.199806 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kz8bv_kube-system(b637d3e5-f4d0-4ad5-a801-bcd5490bb15d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kz8bv_kube-system(b637d3e5-f4d0-4ad5-a801-bcd5490bb15d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kz8bv" podUID="b637d3e5-f4d0-4ad5-a801-bcd5490bb15d" Nov 13 12:07:29.201055 kubelet[2916]: E1113 12:07:29.199907 2916 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.201055 kubelet[2916]: E1113 12:07:29.199956 2916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-58wrm" Nov 13 12:07:29.201055 kubelet[2916]: E1113 12:07:29.199992 2916 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-58wrm" Nov 13 12:07:29.201214 kubelet[2916]: E1113 12:07:29.200062 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-58wrm_kube-system(56d5778e-34f5-4698-a9ed-8cc517424213)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-58wrm_kube-system(56d5778e-34f5-4698-a9ed-8cc517424213)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-58wrm" podUID="56d5778e-34f5-4698-a9ed-8cc517424213" Nov 13 12:07:29.270283 kubelet[2916]: I1113 12:07:29.270232 2916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:29.273262 kubelet[2916]: I1113 12:07:29.273232 2916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:29.294212 kubelet[2916]: I1113 12:07:29.293787 2916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:29.295618 kubelet[2916]: I1113 12:07:29.295555 2916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:29.298516 kubelet[2916]: I1113 12:07:29.297517 2916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:29.304002 containerd[1621]: time="2024-11-13T12:07:29.303637724Z" level=info msg="StopPodSandbox for \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\"" Nov 13 12:07:29.305838 containerd[1621]: time="2024-11-13T12:07:29.304627948Z" level=info msg="StopPodSandbox for \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\"" Nov 13 12:07:29.305838 containerd[1621]: time="2024-11-13T12:07:29.305796722Z" level=info msg="Ensure that sandbox b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126 in task-service has been cleanup successfully" Nov 13 12:07:29.306546 containerd[1621]: time="2024-11-13T12:07:29.306333984Z" level=info msg="Ensure that sandbox 2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38 in task-service has been cleanup successfully" Nov 13 12:07:29.308347 containerd[1621]: time="2024-11-13T12:07:29.307097091Z" level=info msg="StopPodSandbox for \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\"" Nov 13 12:07:29.308670 containerd[1621]: time="2024-11-13T12:07:29.307158063Z" level=info msg="StopPodSandbox for \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\"" Nov 13 12:07:29.313485 containerd[1621]: time="2024-11-13T12:07:29.313402833Z" level=info msg="Ensure that sandbox 5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b in task-service has been cleanup successfully" Nov 13 12:07:29.314086 containerd[1621]: 
time="2024-11-13T12:07:29.307135169Z" level=info msg="StopPodSandbox for \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\"" Nov 13 12:07:29.314086 containerd[1621]: time="2024-11-13T12:07:29.313834761Z" level=info msg="Ensure that sandbox ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c in task-service has been cleanup successfully" Nov 13 12:07:29.316955 containerd[1621]: time="2024-11-13T12:07:29.311455515Z" level=info msg="Ensure that sandbox eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb in task-service has been cleanup successfully" Nov 13 12:07:29.320075 containerd[1621]: time="2024-11-13T12:07:29.319919972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 13 12:07:29.475931 containerd[1621]: time="2024-11-13T12:07:29.473909459Z" level=error msg="StopPodSandbox for \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\" failed" error="failed to destroy network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.476710 kubelet[2916]: E1113 12:07:29.474541 2916 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:29.481353 kubelet[2916]: E1113 12:07:29.480464 2916 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38"} Nov 13 12:07:29.481353 kubelet[2916]: E1113 12:07:29.480662 2916 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 12:07:29.481353 kubelet[2916]: E1113 12:07:29.480754 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57579cb858-kmp5s" podUID="3531763d-3a3a-4c39-b14a-dc83c6ab3c8e" Nov 13 12:07:29.481353 kubelet[2916]: E1113 12:07:29.481084 2916 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:29.481353 kubelet[2916]: E1113 12:07:29.481120 2916 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb"} Nov 13 12:07:29.482209 containerd[1621]: time="2024-11-13T12:07:29.480729589Z" level=error msg="StopPodSandbox for \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\" failed" error="failed to destroy network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.482686 kubelet[2916]: E1113 12:07:29.481167 2916 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5224871-082e-4b53-9a0e-2a5888841a73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 12:07:29.482686 kubelet[2916]: E1113 12:07:29.481216 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5224871-082e-4b53-9a0e-2a5888841a73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57579cb858-82wlr" podUID="b5224871-082e-4b53-9a0e-2a5888841a73" Nov 13 12:07:29.488088 containerd[1621]: time="2024-11-13T12:07:29.487894889Z" level=error msg="StopPodSandbox for \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\" failed" error="failed to destroy network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.488174 kubelet[2916]: E1113 12:07:29.488145 2916 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:29.488295 kubelet[2916]: E1113 12:07:29.488182 2916 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126"} Nov 13 12:07:29.488295 kubelet[2916]: E1113 12:07:29.488267 2916 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"56d5778e-34f5-4698-a9ed-8cc517424213\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 12:07:29.488443 kubelet[2916]: E1113 12:07:29.488306 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56d5778e-34f5-4698-a9ed-8cc517424213\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-58wrm" podUID="56d5778e-34f5-4698-a9ed-8cc517424213" Nov 13 12:07:29.491242 containerd[1621]: time="2024-11-13T12:07:29.491048293Z" level=error msg="StopPodSandbox for \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\" failed" error="failed to destroy network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.491806 containerd[1621]: time="2024-11-13T12:07:29.491512070Z" level=error msg="StopPodSandbox for \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\" failed" error="failed to destroy network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:29.491919 kubelet[2916]: E1113 12:07:29.491633 2916 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:29.491919 kubelet[2916]: E1113 12:07:29.491672 2916 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c"} Nov 13 12:07:29.491919 kubelet[2916]: E1113 12:07:29.491699 2916 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:29.491919 kubelet[2916]: E1113 12:07:29.491716 2916 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 12:07:29.491919 kubelet[2916]: E1113 12:07:29.491736 2916 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b"} Nov 13 12:07:29.492587 kubelet[2916]: E1113 12:07:29.491751 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kz8bv" podUID="b637d3e5-f4d0-4ad5-a801-bcd5490bb15d" Nov 13 12:07:29.492587 kubelet[2916]: E1113 12:07:29.491787 2916 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36b3a588-3839-4c9b-9a4b-8639fa9043ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 12:07:29.492587 kubelet[2916]: E1113 12:07:29.491825 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36b3a588-3839-4c9b-9a4b-8639fa9043ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-668f5df9fb-7xn7m" podUID="36b3a588-3839-4c9b-9a4b-8639fa9043ec" Nov 13 12:07:30.043912 containerd[1621]: time="2024-11-13T12:07:30.043842214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shdhv,Uid:8a886e3e-042b-40d2-b8c2-1a33730ec832,Namespace:calico-system,Attempt:0,}" Nov 13 12:07:30.142712 containerd[1621]: time="2024-11-13T12:07:30.142451234Z" level=error msg="Failed to destroy network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:30.145675 containerd[1621]: time="2024-11-13T12:07:30.145620208Z" level=error msg="encountered an error cleaning up failed sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:30.145827 
containerd[1621]: time="2024-11-13T12:07:30.145734067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shdhv,Uid:8a886e3e-042b-40d2-b8c2-1a33730ec832,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:30.146796 kubelet[2916]: E1113 12:07:30.146759 2916 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:30.148485 kubelet[2916]: E1113 12:07:30.146844 2916 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-shdhv" Nov 13 12:07:30.148485 kubelet[2916]: E1113 12:07:30.146883 2916 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-shdhv" Nov 13 12:07:30.148485 kubelet[2916]: E1113 12:07:30.146990 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-shdhv_calico-system(8a886e3e-042b-40d2-b8c2-1a33730ec832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-shdhv_calico-system(8a886e3e-042b-40d2-b8c2-1a33730ec832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:30.147988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c-shm.mount: Deactivated successfully. 
Nov 13 12:07:30.320063 kubelet[2916]: I1113 12:07:30.319759 2916 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:30.323161 containerd[1621]: time="2024-11-13T12:07:30.321424160Z" level=info msg="StopPodSandbox for \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\"" Nov 13 12:07:30.323161 containerd[1621]: time="2024-11-13T12:07:30.321722646Z" level=info msg="Ensure that sandbox e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c in task-service has been cleanup successfully" Nov 13 12:07:30.361127 containerd[1621]: time="2024-11-13T12:07:30.360997553Z" level=error msg="StopPodSandbox for \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\" failed" error="failed to destroy network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 12:07:30.361845 kubelet[2916]: E1113 12:07:30.361462 2916 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:30.361845 kubelet[2916]: E1113 12:07:30.361555 2916 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c"} Nov 13 12:07:30.361845 kubelet[2916]: E1113 12:07:30.361622 2916 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a886e3e-042b-40d2-b8c2-1a33730ec832\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 12:07:30.361845 kubelet[2916]: E1113 12:07:30.361672 2916 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a886e3e-042b-40d2-b8c2-1a33730ec832\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-shdhv" podUID="8a886e3e-042b-40d2-b8c2-1a33730ec832" Nov 13 12:07:31.350860 kubelet[2916]: I1113 12:07:31.350110 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 12:07:38.588122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967028723.mount: Deactivated successfully. 
Nov 13 12:07:38.681491 containerd[1621]: time="2024-11-13T12:07:38.669105367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 13 12:07:38.700361 containerd[1621]: time="2024-11-13T12:07:38.700147361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 9.366589027s" Nov 13 12:07:38.700361 containerd[1621]: time="2024-11-13T12:07:38.700206966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 13 12:07:38.705689 containerd[1621]: time="2024-11-13T12:07:38.705166848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:38.749059 containerd[1621]: time="2024-11-13T12:07:38.748685011Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:38.749717 containerd[1621]: time="2024-11-13T12:07:38.749633747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:38.779207 containerd[1621]: time="2024-11-13T12:07:38.779111267Z" level=info msg="CreateContainer within sandbox \"cc815bb38d05cfd95459bdcd3b5bb366b7110407bf27b2ccf5e22f632dc3f999\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 13 12:07:38.848275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952453134.mount: Deactivated successfully. Nov 13 12:07:38.874097 containerd[1621]: time="2024-11-13T12:07:38.873962551Z" level=info msg="CreateContainer within sandbox \"cc815bb38d05cfd95459bdcd3b5bb366b7110407bf27b2ccf5e22f632dc3f999\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c8b8e24045ef548961a4d09e7cc1c601f90a291de7da9ec1bfbf5ac6654b9d3\"" Nov 13 12:07:38.880617 containerd[1621]: time="2024-11-13T12:07:38.880565860Z" level=info msg="StartContainer for \"7c8b8e24045ef548961a4d09e7cc1c601f90a291de7da9ec1bfbf5ac6654b9d3\"" Nov 13 12:07:39.201135 containerd[1621]: time="2024-11-13T12:07:39.201058648Z" level=info msg="StartContainer for \"7c8b8e24045ef548961a4d09e7cc1c601f90a291de7da9ec1bfbf5ac6654b9d3\" returns successfully" Nov 13 12:07:39.319766 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 13 12:07:39.321184 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
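[Annotation] The PullImage entry above reports a repo size of 140580572 bytes fetched in 9.366589027s, i.e. roughly 14 MiB/s of effective registry throughput. A one-liner to reproduce the arithmetic (figures copied from the log entry; nothing else assumed):

```go
package main

import "fmt"

func main() {
	const bytesPulled = 140580572.0 // size from the "Pulled image ... calico/node:v3.29.0" entry
	const seconds = 9.366589027     // "in 9.366589027s" from the same entry
	// Prints roughly 14.3 MiB/s.
	fmt.Printf("%.1f MiB/s\n", bytesPulled/seconds/(1024*1024))
}
```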
Nov 13 12:07:39.426029 kubelet[2916]: I1113 12:07:39.422098 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-tfcws" podStartSLOduration=1.499164649 podStartE2EDuration="25.385831554s" podCreationTimestamp="2024-11-13 12:07:14 +0000 UTC" firstStartedPulling="2024-11-13 12:07:14.814194434 +0000 UTC m=+24.058375728" lastFinishedPulling="2024-11-13 12:07:38.700861344 +0000 UTC m=+47.945042633" observedRunningTime="2024-11-13 12:07:39.385567128 +0000 UTC m=+48.629748441" watchObservedRunningTime="2024-11-13 12:07:39.385831554 +0000 UTC m=+48.630012856" Nov 13 12:07:39.721816 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 13 12:07:39.710687 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 13 12:07:39.710816 systemd-resolved[1517]: Flushed all caches. Nov 13 12:07:40.044088 containerd[1621]: time="2024-11-13T12:07:40.043825567Z" level=info msg="StopPodSandbox for \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\"" Nov 13 12:07:40.045631 containerd[1621]: time="2024-11-13T12:07:40.044909111Z" level=info msg="StopPodSandbox for \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\"" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.158 [INFO][4110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.161 [INFO][4110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" iface="eth0" netns="/var/run/netns/cni-99327ffd-fdff-656c-346a-2db16077fe3a" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.162 [INFO][4110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" iface="eth0" netns="/var/run/netns/cni-99327ffd-fdff-656c-346a-2db16077fe3a" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.164 [INFO][4110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" iface="eth0" netns="/var/run/netns/cni-99327ffd-fdff-656c-346a-2db16077fe3a" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.164 [INFO][4110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.164 [INFO][4110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.336 [INFO][4121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.338 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.338 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.350 [WARNING][4121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.350 [INFO][4121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.352 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:40.358388 containerd[1621]: 2024-11-13 12:07:40.355 [INFO][4110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:40.361291 containerd[1621]: time="2024-11-13T12:07:40.359702956Z" level=info msg="TearDown network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\" successfully" Nov 13 12:07:40.361291 containerd[1621]: time="2024-11-13T12:07:40.359795802Z" level=info msg="StopPodSandbox for \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\" returns successfully" Nov 13 12:07:40.368185 containerd[1621]: time="2024-11-13T12:07:40.367179135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668f5df9fb-7xn7m,Uid:36b3a588-3839-4c9b-9a4b-8639fa9043ec,Namespace:calico-system,Attempt:1,}" Nov 13 12:07:40.374418 systemd[1]: run-netns-cni\x2d99327ffd\x2dfdff\x2d656c\x2d346a\x2d2db16077fe3a.mount: Deactivated successfully. Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.153 [INFO][4109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.159 [INFO][4109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" iface="eth0" netns="/var/run/netns/cni-0913f126-f5a1-4187-6135-a96b8d4e8e60" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.162 [INFO][4109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" iface="eth0" netns="/var/run/netns/cni-0913f126-f5a1-4187-6135-a96b8d4e8e60" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.165 [INFO][4109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" iface="eth0" netns="/var/run/netns/cni-0913f126-f5a1-4187-6135-a96b8d4e8e60" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.165 [INFO][4109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.165 [INFO][4109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.336 [INFO][4122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.338 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.352 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.363 [WARNING][4122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.364 [INFO][4122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.369 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:40.382221 containerd[1621]: 2024-11-13 12:07:40.378 [INFO][4109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:40.385026 containerd[1621]: time="2024-11-13T12:07:40.383780387Z" level=info msg="TearDown network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\" successfully" Nov 13 12:07:40.385026 containerd[1621]: time="2024-11-13T12:07:40.383826106Z" level=info msg="StopPodSandbox for \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\" returns successfully" Nov 13 12:07:40.386636 containerd[1621]: time="2024-11-13T12:07:40.386598210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57579cb858-82wlr,Uid:b5224871-082e-4b53-9a0e-2a5888841a73,Namespace:calico-apiserver,Attempt:1,}" Nov 13 12:07:40.391508 systemd[1]: run-netns-cni\x2d0913f126\x2df5a1\x2d4187\x2d6135\x2da96b8d4e8e60.mount: Deactivated successfully. 
Nov 13 12:07:40.685933 systemd-networkd[1261]: cali7a4b595039b: Link UP Nov 13 12:07:40.686854 systemd-networkd[1261]: cali7a4b595039b: Gained carrier Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.479 [INFO][4154] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.517 [INFO][4154] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0 calico-apiserver-57579cb858- calico-apiserver b5224871-082e-4b53-9a0e-2a5888841a73 785 0 2024-11-13 12:07:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57579cb858 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-sx7g0.gb1.brightbox.com calico-apiserver-57579cb858-82wlr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a4b595039b [] []}} ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.517 [INFO][4154] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.608 [INFO][4178] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" HandleID="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.626 [INFO][4178] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" HandleID="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000313060), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-sx7g0.gb1.brightbox.com", "pod":"calico-apiserver-57579cb858-82wlr", "timestamp":"2024-11-13 12:07:40.608666005 +0000 UTC"}, Hostname:"srv-sx7g0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.626 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.626 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.626 [INFO][4178] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-sx7g0.gb1.brightbox.com' Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.628 [INFO][4178] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.637 [INFO][4178] ipam/ipam.go 372: Looking up existing affinities for host host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.643 [INFO][4178] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.645 [INFO][4178] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.648 [INFO][4178] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.648 [INFO][4178] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.650 [INFO][4178] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585 Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.656 [INFO][4178] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.665 [INFO][4178] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.65/26] block=192.168.83.64/26 handle="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.665 [INFO][4178] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.65/26] handle="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.665 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 13 12:07:40.715132 containerd[1621]: 2024-11-13 12:07:40.665 [INFO][4178] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.65/26] IPv6=[] ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" HandleID="k8s-pod-network.3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.719966 containerd[1621]: 2024-11-13 12:07:40.668 [INFO][4154] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5224871-082e-4b53-9a0e-2a5888841a73", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-57579cb858-82wlr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a4b595039b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:40.719966 containerd[1621]: 2024-11-13 12:07:40.668 [INFO][4154] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.65/32] ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.719966 containerd[1621]: 2024-11-13 12:07:40.668 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a4b595039b ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.719966 containerd[1621]: 2024-11-13 12:07:40.687 [INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.719966 containerd[1621]: 2024-11-13 12:07:40.687 [INFO][4154] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5224871-082e-4b53-9a0e-2a5888841a73", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585", Pod:"calico-apiserver-57579cb858-82wlr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a4b595039b", MAC:"b6:18:66:d7:47:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:40.719966 containerd[1621]: 2024-11-13 12:07:40.711 [INFO][4154] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-82wlr" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:40.747621 systemd-networkd[1261]: calie8ce195757a: Link UP Nov 13 12:07:40.749511 systemd-networkd[1261]: calie8ce195757a: Gained carrier Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.507 [INFO][4156] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.528 [INFO][4156] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0 calico-kube-controllers-668f5df9fb- calico-system 36b3a588-3839-4c9b-9a4b-8639fa9043ec 786 0 2024-11-13 12:07:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:668f5df9fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-sx7g0.gb1.brightbox.com calico-kube-controllers-668f5df9fb-7xn7m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8ce195757a [] []}} ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" 
WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.528 [INFO][4156] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.609 [INFO][4182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" HandleID="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.626 [INFO][4182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" HandleID="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051160), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-sx7g0.gb1.brightbox.com", "pod":"calico-kube-controllers-668f5df9fb-7xn7m", "timestamp":"2024-11-13 12:07:40.609743022 +0000 UTC"}, Hostname:"srv-sx7g0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.626 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.665 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.665 [INFO][4182] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-sx7g0.gb1.brightbox.com' Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.670 [INFO][4182] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.679 [INFO][4182] ipam/ipam.go 372: Looking up existing affinities for host host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.692 [INFO][4182] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.696 [INFO][4182] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.702 [INFO][4182] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.702 [INFO][4182] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.711 [INFO][4182] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9 Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.722 [INFO][4182] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.737 [INFO][4182] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.66/26] block=192.168.83.64/26 handle="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.737 [INFO][4182] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.66/26] handle="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.737 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 13 12:07:40.775809 containerd[1621]: 2024-11-13 12:07:40.737 [INFO][4182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.66/26] IPv6=[] ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" HandleID="k8s-pod-network.ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.779725 containerd[1621]: 2024-11-13 12:07:40.740 [INFO][4156] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0", GenerateName:"calico-kube-controllers-668f5df9fb-", Namespace:"calico-system", SelfLink:"", UID:"36b3a588-3839-4c9b-9a4b-8639fa9043ec", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668f5df9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-668f5df9fb-7xn7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ce195757a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:40.779725 containerd[1621]: 2024-11-13 12:07:40.741 [INFO][4156] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.66/32] ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.779725 containerd[1621]: 2024-11-13 12:07:40.742 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8ce195757a ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.779725 containerd[1621]: 2024-11-13 12:07:40.751 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 
12:07:40.779725 containerd[1621]: 2024-11-13 12:07:40.752 [INFO][4156] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0", GenerateName:"calico-kube-controllers-668f5df9fb-", Namespace:"calico-system", SelfLink:"", UID:"36b3a588-3839-4c9b-9a4b-8639fa9043ec", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668f5df9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9", Pod:"calico-kube-controllers-668f5df9fb-7xn7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ce195757a", MAC:"c6:b7:43:52:db:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:40.779725 containerd[1621]: 2024-11-13 12:07:40.772 [INFO][4156] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9" Namespace="calico-system" Pod="calico-kube-controllers-668f5df9fb-7xn7m" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:40.812279 containerd[1621]: time="2024-11-13T12:07:40.812151318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:40.814931 containerd[1621]: time="2024-11-13T12:07:40.812226633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:40.814931 containerd[1621]: time="2024-11-13T12:07:40.812252151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:40.814931 containerd[1621]: time="2024-11-13T12:07:40.812392368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:40.818704 containerd[1621]: time="2024-11-13T12:07:40.818304952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:40.818704 containerd[1621]: time="2024-11-13T12:07:40.818367690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:40.818704 containerd[1621]: time="2024-11-13T12:07:40.818414626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:40.818704 containerd[1621]: time="2024-11-13T12:07:40.818615946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:41.049867 containerd[1621]: time="2024-11-13T12:07:41.049077808Z" level=info msg="StopPodSandbox for \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\"" Nov 13 12:07:41.321403 containerd[1621]: time="2024-11-13T12:07:41.320855903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668f5df9fb-7xn7m,Uid:36b3a588-3839-4c9b-9a4b-8639fa9043ec,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9\"" Nov 13 12:07:41.354079 containerd[1621]: time="2024-11-13T12:07:41.352929072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57579cb858-82wlr,Uid:b5224871-082e-4b53-9a0e-2a5888841a73,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585\"" Nov 13 12:07:41.428193 containerd[1621]: time="2024-11-13T12:07:41.427913837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.409 [INFO][4369] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.409 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" iface="eth0" netns="/var/run/netns/cni-b696ef75-0e61-e799-ab82-7053b0f50e75" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.412 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" iface="eth0" netns="/var/run/netns/cni-b696ef75-0e61-e799-ab82-7053b0f50e75" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.413 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" iface="eth0" netns="/var/run/netns/cni-b696ef75-0e61-e799-ab82-7053b0f50e75" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.413 [INFO][4369] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.413 [INFO][4369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.684 [INFO][4401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.686 [INFO][4401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.686 [INFO][4401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.705 [WARNING][4401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.706 [INFO][4401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.712 [INFO][4401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:41.728452 containerd[1621]: 2024-11-13 12:07:41.717 [INFO][4369] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:41.734248 containerd[1621]: time="2024-11-13T12:07:41.732265601Z" level=info msg="TearDown network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\" successfully" Nov 13 12:07:41.736145 containerd[1621]: time="2024-11-13T12:07:41.734182832Z" level=info msg="StopPodSandbox for \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\" returns successfully" Nov 13 12:07:41.742111 containerd[1621]: time="2024-11-13T12:07:41.739221788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kz8bv,Uid:b637d3e5-f4d0-4ad5-a801-bcd5490bb15d,Namespace:kube-system,Attempt:1,}" Nov 13 12:07:41.739984 systemd[1]: run-netns-cni\x2db696ef75\x2d0e61\x2de799\x2dab82\x2d7053b0f50e75.mount: Deactivated successfully. Nov 13 12:07:41.782212 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 13 12:07:41.757510 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 13 12:07:41.757532 systemd-resolved[1517]: Flushed all caches. 
Nov 13 12:07:41.764408 systemd-networkd[1261]: cali7a4b595039b: Gained IPv6LL Nov 13 12:07:41.822216 systemd-networkd[1261]: calie8ce195757a: Gained IPv6LL Nov 13 12:07:42.050140 containerd[1621]: time="2024-11-13T12:07:42.048415247Z" level=info msg="StopPodSandbox for \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\"" Nov 13 12:07:42.074055 kernel: bpftool[4484]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 13 12:07:42.225762 systemd-networkd[1261]: cali1bb31fb1f30: Link UP Nov 13 12:07:42.226128 systemd-networkd[1261]: cali1bb31fb1f30: Gained carrier Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:41.907 [INFO][4438] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:41.944 [INFO][4438] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0 coredns-76f75df574- kube-system b637d3e5-f4d0-4ad5-a801-bcd5490bb15d 799 0 2024-11-13 12:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-sx7g0.gb1.brightbox.com coredns-76f75df574-kz8bv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1bb31fb1f30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:41.944 [INFO][4438] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.127 [INFO][4468] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" HandleID="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.146 [INFO][4468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" HandleID="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a50d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-sx7g0.gb1.brightbox.com", "pod":"coredns-76f75df574-kz8bv", "timestamp":"2024-11-13 12:07:42.127852452 +0000 UTC"}, Hostname:"srv-sx7g0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.146 [INFO][4468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.146 [INFO][4468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.146 [INFO][4468] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-sx7g0.gb1.brightbox.com' Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.149 [INFO][4468] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.161 [INFO][4468] ipam/ipam.go 372: Looking up existing affinities for host host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.172 [INFO][4468] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.176 [INFO][4468] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.182 [INFO][4468] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.182 [INFO][4468] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.185 [INFO][4468] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.196 [INFO][4468] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.211 [INFO][4468] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.67/26] block=192.168.83.64/26 handle="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.213 [INFO][4468] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.67/26] handle="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.213 [INFO][4468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 13 12:07:42.282038 containerd[1621]: 2024-11-13 12:07:42.214 [INFO][4468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.67/26] IPv6=[] ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" HandleID="k8s-pod-network.99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:42.284826 containerd[1621]: 2024-11-13 12:07:42.218 [INFO][4438] cni-plugin/k8s.go 386: Populated endpoint ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-kz8bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bb31fb1f30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:42.284826 containerd[1621]: 2024-11-13 12:07:42.218 [INFO][4438] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.67/32] ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:42.284826 containerd[1621]: 2024-11-13 12:07:42.218 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bb31fb1f30 ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:42.284826 containerd[1621]: 2024-11-13 12:07:42.225 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" 
WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:42.284826 containerd[1621]: 2024-11-13 12:07:42.226 [INFO][4438] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc", Pod:"coredns-76f75df574-kz8bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bb31fb1f30", MAC:"aa:19:33:c8:e0:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:42.284826 containerd[1621]: 2024-11-13 12:07:42.264 [INFO][4438] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc" Namespace="kube-system" Pod="coredns-76f75df574-kz8bv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:42.376149 containerd[1621]: time="2024-11-13T12:07:42.374248784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:42.376149 containerd[1621]: time="2024-11-13T12:07:42.374363797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:42.376149 containerd[1621]: time="2024-11-13T12:07:42.374382257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:42.376149 containerd[1621]: time="2024-11-13T12:07:42.374530665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.245 [INFO][4495] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.248 [INFO][4495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" iface="eth0" netns="/var/run/netns/cni-8c506f67-9c8a-8d38-8e6c-00515aabb303" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.248 [INFO][4495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" iface="eth0" netns="/var/run/netns/cni-8c506f67-9c8a-8d38-8e6c-00515aabb303" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.249 [INFO][4495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" iface="eth0" netns="/var/run/netns/cni-8c506f67-9c8a-8d38-8e6c-00515aabb303" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.249 [INFO][4495] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.249 [INFO][4495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.376 [INFO][4506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.376 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.376 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.397 [WARNING][4506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.397 [INFO][4506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.402 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:42.434178 containerd[1621]: 2024-11-13 12:07:42.424 [INFO][4495] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:42.438256 containerd[1621]: time="2024-11-13T12:07:42.436197778Z" level=info msg="TearDown network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\" successfully" Nov 13 12:07:42.438256 containerd[1621]: time="2024-11-13T12:07:42.436364779Z" level=info msg="StopPodSandbox for \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\" returns successfully" Nov 13 12:07:42.441995 containerd[1621]: time="2024-11-13T12:07:42.441840054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-58wrm,Uid:56d5778e-34f5-4698-a9ed-8cc517424213,Namespace:kube-system,Attempt:1,}" Nov 13 12:07:42.540735 containerd[1621]: time="2024-11-13T12:07:42.540442449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kz8bv,Uid:b637d3e5-f4d0-4ad5-a801-bcd5490bb15d,Namespace:kube-system,Attempt:1,} returns sandbox id \"99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc\"" Nov 13 12:07:42.560926 containerd[1621]: time="2024-11-13T12:07:42.560875326Z" level=info msg="CreateContainer within sandbox \"99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 12:07:42.600878 containerd[1621]: time="2024-11-13T12:07:42.600802580Z" level=info msg="CreateContainer within sandbox \"99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"495bf5dcb37136b766de0a3c7650ece8e3bd22ad5be8bda6b9858675c79b7d25\"" Nov 13 12:07:42.605154 containerd[1621]: time="2024-11-13T12:07:42.603462795Z" level=info msg="StartContainer for \"495bf5dcb37136b766de0a3c7650ece8e3bd22ad5be8bda6b9858675c79b7d25\"" Nov 13 12:07:42.647388 systemd-networkd[1261]: vxlan.calico: Link UP Nov 13 12:07:42.647399 systemd-networkd[1261]: vxlan.calico: Gained carrier Nov 13 12:07:42.743281 systemd[1]: run-netns-cni\x2d8c506f67\x2d9c8a\x2d8d38\x2d8e6c\x2d00515aabb303.mount: Deactivated successfully. 
Nov 13 12:07:42.892185 containerd[1621]: time="2024-11-13T12:07:42.891656472Z" level=info msg="StartContainer for \"495bf5dcb37136b766de0a3c7650ece8e3bd22ad5be8bda6b9858675c79b7d25\" returns successfully" Nov 13 12:07:42.908399 systemd-networkd[1261]: calidd5a9dbecf4: Link UP Nov 13 12:07:42.910842 systemd-networkd[1261]: calidd5a9dbecf4: Gained carrier Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.652 [INFO][4555] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0 coredns-76f75df574- kube-system 56d5778e-34f5-4698-a9ed-8cc517424213 806 0 2024-11-13 12:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-sx7g0.gb1.brightbox.com coredns-76f75df574-58wrm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd5a9dbecf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.652 [INFO][4555] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.808 [INFO][4611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" HandleID="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.828 [INFO][4611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" HandleID="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000404530), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-sx7g0.gb1.brightbox.com", "pod":"coredns-76f75df574-58wrm", "timestamp":"2024-11-13 12:07:42.807699715 +0000 UTC"}, Hostname:"srv-sx7g0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.828 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.828 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.828 [INFO][4611] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-sx7g0.gb1.brightbox.com' Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.833 [INFO][4611] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.842 [INFO][4611] ipam/ipam.go 372: Looking up existing affinities for host host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.853 [INFO][4611] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.857 [INFO][4611] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.863 [INFO][4611] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.863 [INFO][4611] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.866 [INFO][4611] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4 Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.874 [INFO][4611] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.887 [INFO][4611] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.68/26] block=192.168.83.64/26 handle="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.887 [INFO][4611] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.68/26] handle="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.887 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 13 12:07:42.945821 containerd[1621]: 2024-11-13 12:07:42.887 [INFO][4611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.68/26] IPv6=[] ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" HandleID="k8s-pod-network.3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.948607 containerd[1621]: 2024-11-13 12:07:42.894 [INFO][4555] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"56d5778e-34f5-4698-a9ed-8cc517424213", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-58wrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd5a9dbecf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:42.948607 containerd[1621]: 2024-11-13 12:07:42.895 [INFO][4555] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.68/32] ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.948607 containerd[1621]: 2024-11-13 12:07:42.895 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd5a9dbecf4 ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.948607 containerd[1621]: 2024-11-13 12:07:42.911 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" 
WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.948607 containerd[1621]: 2024-11-13 12:07:42.912 [INFO][4555] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"56d5778e-34f5-4698-a9ed-8cc517424213", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4", Pod:"coredns-76f75df574-58wrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd5a9dbecf4", MAC:"c2:37:9b:07:bb:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:42.948607 containerd[1621]: 2024-11-13 12:07:42.936 [INFO][4555] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4" Namespace="kube-system" Pod="coredns-76f75df574-58wrm" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:42.990416 containerd[1621]: time="2024-11-13T12:07:42.990273491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:42.991951 containerd[1621]: time="2024-11-13T12:07:42.991885630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:42.992890 containerd[1621]: time="2024-11-13T12:07:42.992167814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:42.993217 containerd[1621]: time="2024-11-13T12:07:42.992838482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:43.043492 containerd[1621]: time="2024-11-13T12:07:43.042518214Z" level=info msg="StopPodSandbox for \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\"" Nov 13 12:07:43.276319 containerd[1621]: time="2024-11-13T12:07:43.274913782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-58wrm,Uid:56d5778e-34f5-4698-a9ed-8cc517424213,Namespace:kube-system,Attempt:1,} returns sandbox id \"3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4\"" Nov 13 12:07:43.321204 containerd[1621]: time="2024-11-13T12:07:43.320779486Z" level=info msg="CreateContainer within sandbox \"3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 12:07:43.368151 containerd[1621]: time="2024-11-13T12:07:43.368087674Z" level=info msg="CreateContainer within sandbox \"3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca3a4784276031db12f5912c749dfa451f36f7493ebf3768be6483ce0b2690ff\"" Nov 13 12:07:43.371856 containerd[1621]: time="2024-11-13T12:07:43.371811377Z" level=info msg="StartContainer for \"ca3a4784276031db12f5912c749dfa451f36f7493ebf3768be6483ce0b2690ff\"" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.384 [INFO][4719] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.384 [INFO][4719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" iface="eth0" netns="/var/run/netns/cni-b1dbbff1-a3b7-cd66-1a31-5eba909b1771" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.384 [INFO][4719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" iface="eth0" netns="/var/run/netns/cni-b1dbbff1-a3b7-cd66-1a31-5eba909b1771" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.385 [INFO][4719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" iface="eth0" netns="/var/run/netns/cni-b1dbbff1-a3b7-cd66-1a31-5eba909b1771" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.386 [INFO][4719] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.386 [INFO][4719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.477 [INFO][4749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.478 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.478 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.497 [WARNING][4749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.498 [INFO][4749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.504 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:43.558487 containerd[1621]: 2024-11-13 12:07:43.526 [INFO][4719] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:43.561280 containerd[1621]: time="2024-11-13T12:07:43.561083344Z" level=info msg="TearDown network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\" successfully" Nov 13 12:07:43.561280 containerd[1621]: time="2024-11-13T12:07:43.561135701Z" level=info msg="StopPodSandbox for \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\" returns successfully" Nov 13 12:07:43.620124 containerd[1621]: time="2024-11-13T12:07:43.619212257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57579cb858-kmp5s,Uid:3531763d-3a3a-4c39-b14a-dc83c6ab3c8e,Namespace:calico-apiserver,Attempt:1,}" Nov 13 12:07:43.639045 kubelet[2916]: I1113 12:07:43.638911 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kz8bv" podStartSLOduration=39.63882686 podStartE2EDuration="39.63882686s" podCreationTimestamp="2024-11-13 12:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 12:07:43.610380654 +0000 UTC m=+52.854561965" watchObservedRunningTime="2024-11-13 12:07:43.63882686 +0000 UTC m=+52.883008180" Nov 13 12:07:43.743031 containerd[1621]: time="2024-11-13T12:07:43.742372237Z" level=info msg="StartContainer for \"ca3a4784276031db12f5912c749dfa451f36f7493ebf3768be6483ce0b2690ff\" returns successfully" Nov 13 12:07:43.743912 systemd[1]: run-netns-cni\x2db1dbbff1\x2da3b7\x2dcd66\x2d1a31\x2d5eba909b1771.mount: Deactivated successfully. 
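The kubelet pod_startup_latency_tracker entry above reports podStartSLOduration=39.63882686 for coredns-76f75df574-kz8bv; in this entry the value lines up with watchObservedRunningTime minus podCreationTimestamp (12:07:43.63882686 minus 12:07:04), and the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull contributed. A small Go check of that arithmetic, using only the timestamps printed in the log (the layout string is Go's default time.Time format, assumed here; parse errors are ignored for brevity):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting as printed by the kubelet.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, _ := time.Parse(layout, "2024-11-13 12:07:04 +0000 UTC")
        observed, _ := time.Parse(layout, "2024-11-13 12:07:43.63882686 +0000 UTC")

        // Matches the reported podStartSLOduration.
        fmt.Println(observed.Sub(created)) // 39.63882686s
    }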
Nov 13 12:07:44.041083 systemd-networkd[1261]: cali6977d659a02: Link UP Nov 13 12:07:44.041887 systemd-networkd[1261]: cali6977d659a02: Gained carrier Nov 13 12:07:44.061359 systemd-networkd[1261]: calidd5a9dbecf4: Gained IPv6LL Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.870 [INFO][4801] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0 calico-apiserver-57579cb858- calico-apiserver 3531763d-3a3a-4c39-b14a-dc83c6ab3c8e 819 0 2024-11-13 12:07:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57579cb858 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-sx7g0.gb1.brightbox.com calico-apiserver-57579cb858-kmp5s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6977d659a02 [] []}} ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.874 [INFO][4801] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.967 [INFO][4833] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" HandleID="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.985 [INFO][4833] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" HandleID="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039f140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-sx7g0.gb1.brightbox.com", "pod":"calico-apiserver-57579cb858-kmp5s", "timestamp":"2024-11-13 12:07:43.967405152 +0000 UTC"}, Hostname:"srv-sx7g0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.985 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.985 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.985 [INFO][4833] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-sx7g0.gb1.brightbox.com' Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.987 [INFO][4833] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.993 [INFO][4833] ipam/ipam.go 372: Looking up existing affinities for host host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:43.999 [INFO][4833] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.001 [INFO][4833] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.005 [INFO][4833] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.006 [INFO][4833] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.008 [INFO][4833] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.017 [INFO][4833] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.030 [INFO][4833] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.69/26] block=192.168.83.64/26 handle="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.030 [INFO][4833] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.69/26] handle="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.030 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 13 12:07:44.083215 containerd[1621]: 2024-11-13 12:07:44.030 [INFO][4833] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.69/26] IPv6=[] ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" HandleID="k8s-pod-network.04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:44.085519 containerd[1621]: 2024-11-13 12:07:44.034 [INFO][4801] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-57579cb858-kmp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6977d659a02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:44.085519 containerd[1621]: 2024-11-13 12:07:44.034 [INFO][4801] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.69/32] ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:44.085519 containerd[1621]: 2024-11-13 12:07:44.034 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6977d659a02 ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:44.085519 containerd[1621]: 2024-11-13 12:07:44.044 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:44.085519 containerd[1621]: 2024-11-13 12:07:44.045 [INFO][4801] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f", Pod:"calico-apiserver-57579cb858-kmp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6977d659a02", MAC:"7a:25:2a:93:45:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:44.085519 containerd[1621]: 2024-11-13 12:07:44.076 [INFO][4801] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f" Namespace="calico-apiserver" Pod="calico-apiserver-57579cb858-kmp5s" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:44.147311 containerd[1621]: time="2024-11-13T12:07:44.147133748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:44.147311 containerd[1621]: time="2024-11-13T12:07:44.147230658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:44.147311 containerd[1621]: time="2024-11-13T12:07:44.147258826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:44.183644 containerd[1621]: time="2024-11-13T12:07:44.147398665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:44.189320 systemd-networkd[1261]: vxlan.calico: Gained IPv6LL Nov 13 12:07:44.191361 systemd-networkd[1261]: cali1bb31fb1f30: Gained IPv6LL Nov 13 12:07:44.270679 containerd[1621]: time="2024-11-13T12:07:44.270522977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57579cb858-kmp5s,Uid:3531763d-3a3a-4c39-b14a-dc83c6ab3c8e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f\"" Nov 13 12:07:44.576244 kubelet[2916]: I1113 12:07:44.576134 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-58wrm" podStartSLOduration=40.575644369 podStartE2EDuration="40.575644369s" podCreationTimestamp="2024-11-13 12:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 12:07:44.573662113 +0000 UTC m=+53.817843430" watchObservedRunningTime="2024-11-13 12:07:44.575644369 +0000 UTC m=+53.819825676" Nov 13 12:07:45.043279 containerd[1621]: time="2024-11-13T12:07:45.042710135Z" level=info msg="StopPodSandbox for \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\"" Nov 13 12:07:45.341358 systemd-networkd[1261]: cali6977d659a02: Gained IPv6LL Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.217 [INFO][4919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.221 [INFO][4919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" iface="eth0" netns="/var/run/netns/cni-e09a1721-d633-7416-eb0e-f0253a3eecb3" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.223 [INFO][4919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" iface="eth0" netns="/var/run/netns/cni-e09a1721-d633-7416-eb0e-f0253a3eecb3" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.224 [INFO][4919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" iface="eth0" netns="/var/run/netns/cni-e09a1721-d633-7416-eb0e-f0253a3eecb3" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.225 [INFO][4919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.225 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.377 [INFO][4926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.377 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.378 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.389 [WARNING][4926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.389 [INFO][4926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.391 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:45.397070 containerd[1621]: 2024-11-13 12:07:45.393 [INFO][4919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:45.397070 containerd[1621]: time="2024-11-13T12:07:45.396158063Z" level=info msg="TearDown network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\" successfully" Nov 13 12:07:45.397070 containerd[1621]: time="2024-11-13T12:07:45.396202484Z" level=info msg="StopPodSandbox for \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\" returns successfully" Nov 13 12:07:45.401076 containerd[1621]: time="2024-11-13T12:07:45.401035212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shdhv,Uid:8a886e3e-042b-40d2-b8c2-1a33730ec832,Namespace:calico-system,Attempt:1,}" Nov 13 12:07:45.402449 systemd[1]: run-netns-cni\x2de09a1721\x2dd633\x2d7416\x2deb0e\x2df0253a3eecb3.mount: Deactivated successfully. 
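The teardown trace for sandbox e499fb67... above, like the earlier ones for b4223c8a... and 2c71cd07..., shows the release path trying the handle ID first, then the workload ID, and downgrading a missing allocation to a warning that is ignored, so repeated or late teardowns do not fail. The following is only a toy illustration of that idempotent-release pattern, not Calico's IPAM API; the in-memory map and the handle name are hypothetical, and the address literal is taken from the log:

    package main

    import (
        "fmt"
        "log"
    )

    // toyIPAM is a stand-in store mapping handle IDs to allocated addresses.
    type toyIPAM struct {
        byHandle map[string]string
    }

    // release removes the allocation for a handle if it exists; a missing
    // allocation is only warned about and ignored, keeping teardown idempotent.
    func (s *toyIPAM) release(handle string) {
        addr, ok := s.byHandle[handle]
        if !ok {
            log.Printf("WARNING: asked to release %q but it doesn't exist, ignoring", handle)
            return
        }
        delete(s.byHandle, handle)
        fmt.Printf("released %s for %s\n", addr, handle)
    }

    func main() {
        s := &toyIPAM{byHandle: map[string]string{
            "k8s-pod-network.example-handle": "192.168.83.67", // hypothetical handle
        }}
        s.release("k8s-pod-network.example-handle") // releases the address
        s.release("k8s-pod-network.example-handle") // second call: warning only, no error
    }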
Nov 13 12:07:45.722724 systemd-networkd[1261]: calicce73b88804: Link UP Nov 13 12:07:45.724209 systemd-networkd[1261]: calicce73b88804: Gained carrier Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.500 [INFO][4934] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0 csi-node-driver- calico-system 8a886e3e-042b-40d2-b8c2-1a33730ec832 845 0 2024-11-13 12:07:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-sx7g0.gb1.brightbox.com csi-node-driver-shdhv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicce73b88804 [] []}} ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.500 [INFO][4934] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.636 [INFO][4946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" HandleID="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.657 [INFO][4946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" HandleID="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efa90), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-sx7g0.gb1.brightbox.com", "pod":"csi-node-driver-shdhv", "timestamp":"2024-11-13 12:07:45.636542882 +0000 UTC"}, Hostname:"srv-sx7g0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.657 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.657 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.657 [INFO][4946] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-sx7g0.gb1.brightbox.com' Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.660 [INFO][4946] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.669 [INFO][4946] ipam/ipam.go 372: Looking up existing affinities for host host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.677 [INFO][4946] ipam/ipam.go 489: Trying affinity for 192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.681 [INFO][4946] ipam/ipam.go 155: Attempting to load block cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.685 [INFO][4946] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.685 [INFO][4946] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.687 [INFO][4946] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.696 [INFO][4946] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.709 [INFO][4946] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.83.70/26] block=192.168.83.64/26 handle="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.709 [INFO][4946] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.83.70/26] handle="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" host="srv-sx7g0.gb1.brightbox.com" Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.709 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 13 12:07:45.758683 containerd[1621]: 2024-11-13 12:07:45.709 [INFO][4946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.70/26] IPv6=[] ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" HandleID="k8s-pod-network.4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.759937 containerd[1621]: 2024-11-13 12:07:45.714 [INFO][4934] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a886e3e-042b-40d2-b8c2-1a33730ec832", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-shdhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicce73b88804", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:45.759937 containerd[1621]: 2024-11-13 12:07:45.714 [INFO][4934] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.83.70/32] ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.759937 containerd[1621]: 2024-11-13 12:07:45.714 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicce73b88804 ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.759937 containerd[1621]: 2024-11-13 12:07:45.723 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.759937 containerd[1621]: 2024-11-13 12:07:45.726 [INFO][4934] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a886e3e-042b-40d2-b8c2-1a33730ec832", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd", Pod:"csi-node-driver-shdhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicce73b88804", MAC:"e2:90:5d:96:dd:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:45.759937 containerd[1621]: 2024-11-13 12:07:45.750 [INFO][4934] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd" Namespace="calico-system" Pod="csi-node-driver-shdhv" WorkloadEndpoint="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:45.836474 containerd[1621]: time="2024-11-13T12:07:45.835771833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 12:07:45.843090 containerd[1621]: time="2024-11-13T12:07:45.842354762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 12:07:45.844848 containerd[1621]: time="2024-11-13T12:07:45.844128564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:45.844848 containerd[1621]: time="2024-11-13T12:07:45.844404507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 12:07:45.995174 containerd[1621]: time="2024-11-13T12:07:45.994828223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-shdhv,Uid:8a886e3e-042b-40d2-b8c2-1a33730ec832,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd\"" Nov 13 12:07:46.362499 containerd[1621]: time="2024-11-13T12:07:46.362424092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:46.364012 containerd[1621]: time="2024-11-13T12:07:46.363766216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 13 12:07:46.365189 containerd[1621]: time="2024-11-13T12:07:46.364821460Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:46.368418 containerd[1621]: time="2024-11-13T12:07:46.368379531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:46.370087 containerd[1621]: time="2024-11-13T12:07:46.370048575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 4.942000521s" Nov 13 12:07:46.370321 containerd[1621]: time="2024-11-13T12:07:46.370237389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 13 12:07:46.371325 containerd[1621]: time="2024-11-13T12:07:46.371220531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 13 12:07:46.373889 containerd[1621]: time="2024-11-13T12:07:46.373842346Z" level=info msg="CreateContainer within sandbox \"3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 13 12:07:46.415662 containerd[1621]: time="2024-11-13T12:07:46.415523791Z" level=info msg="CreateContainer within sandbox \"3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7d8d6dcc7eb3520476352411fab560181c2ca6e3c57182796bc72de225b5eb5e\"" Nov 13 12:07:46.416737 containerd[1621]: time="2024-11-13T12:07:46.416585141Z" level=info msg="StartContainer for \"7d8d6dcc7eb3520476352411fab560181c2ca6e3c57182796bc72de225b5eb5e\"" Nov 13 12:07:46.561736 containerd[1621]: time="2024-11-13T12:07:46.561594679Z" level=info msg="StartContainer for \"7d8d6dcc7eb3520476352411fab560181c2ca6e3c57182796bc72de225b5eb5e\" returns successfully" Nov 13 12:07:46.630087 kubelet[2916]: I1113 12:07:46.629698 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57579cb858-82wlr" podStartSLOduration=28.623716666 podStartE2EDuration="33.62961004s" podCreationTimestamp="2024-11-13 12:07:13 +0000 UTC" firstStartedPulling="2024-11-13 
12:07:41.365052781 +0000 UTC m=+50.609234069" lastFinishedPulling="2024-11-13 12:07:46.370946136 +0000 UTC m=+55.615127443" observedRunningTime="2024-11-13 12:07:46.628583078 +0000 UTC m=+55.872764383" watchObservedRunningTime="2024-11-13 12:07:46.62961004 +0000 UTC m=+55.873791349" Nov 13 12:07:47.615488 kubelet[2916]: I1113 12:07:47.615429 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 12:07:47.709615 systemd-networkd[1261]: calicce73b88804: Gained IPv6LL Nov 13 12:07:49.613679 containerd[1621]: time="2024-11-13T12:07:49.613612052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:49.616057 containerd[1621]: time="2024-11-13T12:07:49.615967754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 13 12:07:49.622288 containerd[1621]: time="2024-11-13T12:07:49.616237361Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:49.622288 containerd[1621]: time="2024-11-13T12:07:49.621205088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:49.624334 containerd[1621]: time="2024-11-13T12:07:49.624299086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 3.253003362s" Nov 13 12:07:49.624461 containerd[1621]: time="2024-11-13T12:07:49.624435048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 13 12:07:49.627041 containerd[1621]: time="2024-11-13T12:07:49.626955572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 13 12:07:49.675637 containerd[1621]: time="2024-11-13T12:07:49.675527028Z" level=info msg="CreateContainer within sandbox \"ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 13 12:07:49.715348 containerd[1621]: time="2024-11-13T12:07:49.715140630Z" level=info msg="CreateContainer within sandbox \"ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"440e27fde854dcf85b170e9bac673af0887823d29180e630414f1505d53eb4fe\"" Nov 13 12:07:49.716295 containerd[1621]: time="2024-11-13T12:07:49.716235475Z" level=info msg="StartContainer for \"440e27fde854dcf85b170e9bac673af0887823d29180e630414f1505d53eb4fe\"" Nov 13 12:07:49.879995 containerd[1621]: time="2024-11-13T12:07:49.879634994Z" level=info msg="StartContainer for \"440e27fde854dcf85b170e9bac673af0887823d29180e630414f1505d53eb4fe\" returns successfully" Nov 13 12:07:50.000993 containerd[1621]: time="2024-11-13T12:07:49.999956561Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:50.000993 containerd[1621]: time="2024-11-13T12:07:50.000517362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 13 12:07:50.013125 containerd[1621]: time="2024-11-13T12:07:50.012201283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 385.197182ms" Nov 13 12:07:50.013125 containerd[1621]: time="2024-11-13T12:07:50.012271481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 13 12:07:50.020665 containerd[1621]: time="2024-11-13T12:07:50.020596542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 13 12:07:50.032518 containerd[1621]: time="2024-11-13T12:07:50.032440933Z" level=info msg="CreateContainer within sandbox \"04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 13 12:07:50.056245 containerd[1621]: time="2024-11-13T12:07:50.056146258Z" level=info msg="CreateContainer within sandbox \"04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0df6578ed7e2f88b801417e67bfb4860b4092dd5a5c9ab4f61b5502010b3715\"" Nov 13 12:07:50.060066 containerd[1621]: time="2024-11-13T12:07:50.057571439Z" level=info msg="StartContainer for \"c0df6578ed7e2f88b801417e67bfb4860b4092dd5a5c9ab4f61b5502010b3715\"" Nov 13 12:07:50.256987 containerd[1621]: time="2024-11-13T12:07:50.255956854Z" level=info msg="StartContainer for \"c0df6578ed7e2f88b801417e67bfb4860b4092dd5a5c9ab4f61b5502010b3715\" returns successfully" Nov 13 12:07:50.706403 kubelet[2916]: I1113 12:07:50.706349 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-668f5df9fb-7xn7m" podStartSLOduration=28.455705897 podStartE2EDuration="36.704695561s" podCreationTimestamp="2024-11-13 12:07:14 +0000 UTC" firstStartedPulling="2024-11-13 12:07:41.376880957 +0000 UTC m=+50.621062245" lastFinishedPulling="2024-11-13 12:07:49.625870616 +0000 UTC m=+58.870051909" observedRunningTime="2024-11-13 12:07:50.677299313 +0000 UTC m=+59.921480624" watchObservedRunningTime="2024-11-13 12:07:50.704695561 +0000 UTC m=+59.948876863" Nov 13 12:07:50.710259 kubelet[2916]: I1113 12:07:50.709362 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57579cb858-kmp5s" podStartSLOduration=31.964835871 podStartE2EDuration="37.708327244s" podCreationTimestamp="2024-11-13 12:07:13 +0000 UTC" firstStartedPulling="2024-11-13 12:07:44.276399413 +0000 UTC m=+53.520580702" lastFinishedPulling="2024-11-13 12:07:50.019890781 +0000 UTC m=+59.264072075" observedRunningTime="2024-11-13 12:07:50.702660122 +0000 UTC m=+59.946841430" watchObservedRunningTime="2024-11-13 12:07:50.708327244 +0000 UTC m=+59.952508551" Nov 13 12:07:51.119027 containerd[1621]: time="2024-11-13T12:07:51.116713380Z" level=info msg="StopPodSandbox for \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\"" Nov 13 
12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.310 [WARNING][5171] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a886e3e-042b-40d2-b8c2-1a33730ec832", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd", Pod:"csi-node-driver-shdhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicce73b88804", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.315 [INFO][5171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.315 [INFO][5171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" iface="eth0" netns="" Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.315 [INFO][5171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.315 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.414 [INFO][5178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.414 [INFO][5178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.414 [INFO][5178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.434 [WARNING][5178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.434 [INFO][5178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.437 [INFO][5178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:51.453875 containerd[1621]: 2024-11-13 12:07:51.445 [INFO][5171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.453875 containerd[1621]: time="2024-11-13T12:07:51.453211780Z" level=info msg="TearDown network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\" successfully" Nov 13 12:07:51.453875 containerd[1621]: time="2024-11-13T12:07:51.453246056Z" level=info msg="StopPodSandbox for \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\" returns successfully" Nov 13 12:07:51.464730 containerd[1621]: time="2024-11-13T12:07:51.464685985Z" level=info msg="RemovePodSandbox for \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\"" Nov 13 12:07:51.479526 containerd[1621]: time="2024-11-13T12:07:51.479485163Z" level=info msg="Forcibly stopping sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\"" Nov 13 12:07:51.656107 kubelet[2916]: I1113 12:07:51.655958 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 12:07:51.719185 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 13 12:07:51.677909 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 13 12:07:51.678626 systemd-resolved[1517]: Flushed all caches. Nov 13 12:07:51.874352 systemd[1]: run-containerd-runc-k8s.io-440e27fde854dcf85b170e9bac673af0887823d29180e630414f1505d53eb4fe-runc.uYDDRV.mount: Deactivated successfully. Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.618 [WARNING][5196] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a886e3e-042b-40d2-b8c2-1a33730ec832", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd", Pod:"csi-node-driver-shdhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicce73b88804", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.624 [INFO][5196] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.625 [INFO][5196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" iface="eth0" netns="" Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.625 [INFO][5196] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.625 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.903 [INFO][5203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.908 [INFO][5203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.910 [INFO][5203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.927 [WARNING][5203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.928 [INFO][5203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" HandleID="k8s-pod-network.e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-csi--node--driver--shdhv-eth0" Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.938 [INFO][5203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:51.965457 containerd[1621]: 2024-11-13 12:07:51.956 [INFO][5196] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c" Nov 13 12:07:51.968078 containerd[1621]: time="2024-11-13T12:07:51.966771378Z" level=info msg="TearDown network for sandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\" successfully" Nov 13 12:07:52.028102 containerd[1621]: time="2024-11-13T12:07:52.027086295Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 13 12:07:52.028894 containerd[1621]: time="2024-11-13T12:07:52.028601668Z" level=info msg="RemovePodSandbox \"e499fb67b4891d47aeacea2c689a82a8971d5ee08b39f00648a857d3a035e84c\" returns successfully" Nov 13 12:07:52.035539 containerd[1621]: time="2024-11-13T12:07:52.035146303Z" level=info msg="StopPodSandbox for \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\"" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.392 [WARNING][5249] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"56d5778e-34f5-4698-a9ed-8cc517424213", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4", Pod:"coredns-76f75df574-58wrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd5a9dbecf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.395 [INFO][5249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.395 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" iface="eth0" netns="" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.395 [INFO][5249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.395 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.569 [INFO][5255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.572 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.572 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.593 [WARNING][5255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.593 [INFO][5255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.600 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:52.614518 containerd[1621]: 2024-11-13 12:07:52.605 [INFO][5249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.618843 containerd[1621]: time="2024-11-13T12:07:52.614909284Z" level=info msg="TearDown network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\" successfully" Nov 13 12:07:52.618843 containerd[1621]: time="2024-11-13T12:07:52.615301026Z" level=info msg="StopPodSandbox for \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\" returns successfully" Nov 13 12:07:52.618843 containerd[1621]: time="2024-11-13T12:07:52.618885107Z" level=info msg="RemovePodSandbox for \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\"" Nov 13 12:07:52.619278 containerd[1621]: time="2024-11-13T12:07:52.619060989Z" level=info msg="Forcibly stopping sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\"" Nov 13 12:07:52.661118 kubelet[2916]: I1113 12:07:52.659960 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 12:07:52.718058 containerd[1621]: time="2024-11-13T12:07:52.715896894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:52.721147 containerd[1621]: time="2024-11-13T12:07:52.721100895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 13 12:07:52.723223 containerd[1621]: time="2024-11-13T12:07:52.723167524Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:52.736749 containerd[1621]: time="2024-11-13T12:07:52.736707827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:52.745758 containerd[1621]: time="2024-11-13T12:07:52.745599501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 2.724628425s" Nov 13 12:07:52.745888 containerd[1621]: time="2024-11-13T12:07:52.745767698Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 13 12:07:52.755162 containerd[1621]: time="2024-11-13T12:07:52.755127730Z" level=info msg="CreateContainer within sandbox \"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 13 12:07:52.833075 containerd[1621]: time="2024-11-13T12:07:52.832044533Z" level=info msg="CreateContainer within sandbox \"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d6c5efeaf65926aa0fa7057ca8657b1a9db72c811e02f3743fbb115a56b32182\"" Nov 13 12:07:52.835532 containerd[1621]: time="2024-11-13T12:07:52.834402256Z" level=info msg="StartContainer for \"d6c5efeaf65926aa0fa7057ca8657b1a9db72c811e02f3743fbb115a56b32182\"" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.772 [WARNING][5273] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"56d5778e-34f5-4698-a9ed-8cc517424213", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"3351f30dec5fa93bf17302263dceab35ffe39c56e2c16c11ebb000d8d5737ff4", Pod:"coredns-76f75df574-58wrm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd5a9dbecf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.775 [INFO][5273] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.775 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" iface="eth0" netns="" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.775 [INFO][5273] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.775 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.872 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.877 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.877 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.913 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.914 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" HandleID="k8s-pod-network.b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--58wrm-eth0" Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.920 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:52.940403 containerd[1621]: 2024-11-13 12:07:52.933 [INFO][5273] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126" Nov 13 12:07:52.940403 containerd[1621]: time="2024-11-13T12:07:52.940132917Z" level=info msg="TearDown network for sandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\" successfully" Nov 13 12:07:52.945746 containerd[1621]: time="2024-11-13T12:07:52.945712432Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 12:07:52.945936 containerd[1621]: time="2024-11-13T12:07:52.945905523Z" level=info msg="RemovePodSandbox \"b4223c8a31fa4cde8619c5216c6837f686b3df265c3fc95b8ee36b05a9487126\" returns successfully" Nov 13 12:07:52.946947 containerd[1621]: time="2024-11-13T12:07:52.946917323Z" level=info msg="StopPodSandbox for \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\"" Nov 13 12:07:53.121220 containerd[1621]: time="2024-11-13T12:07:53.121133924Z" level=info msg="StartContainer for \"d6c5efeaf65926aa0fa7057ca8657b1a9db72c811e02f3743fbb115a56b32182\" returns successfully" Nov 13 12:07:53.125643 containerd[1621]: time="2024-11-13T12:07:53.125514434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.124 [WARNING][5316] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f", Pod:"calico-apiserver-57579cb858-kmp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6977d659a02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.126 [INFO][5316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.126 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" iface="eth0" netns="" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.126 [INFO][5316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.126 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.211 [INFO][5333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.213 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.213 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.226 [WARNING][5333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.226 [INFO][5333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.230 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:53.236965 containerd[1621]: 2024-11-13 12:07:53.233 [INFO][5316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.236965 containerd[1621]: time="2024-11-13T12:07:53.236948339Z" level=info msg="TearDown network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\" successfully" Nov 13 12:07:53.240074 containerd[1621]: time="2024-11-13T12:07:53.236988129Z" level=info msg="StopPodSandbox for \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\" returns successfully" Nov 13 12:07:53.240074 containerd[1621]: time="2024-11-13T12:07:53.238807703Z" level=info msg="RemovePodSandbox for \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\"" Nov 13 12:07:53.240074 containerd[1621]: time="2024-11-13T12:07:53.238861588Z" level=info msg="Forcibly stopping sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\"" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.314 [WARNING][5355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"3531763d-3a3a-4c39-b14a-dc83c6ab3c8e", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"04d01acb3f7cc9811f58a4ad7e506710f242ac3e1401431fe52b88e8cfba923f", Pod:"calico-apiserver-57579cb858-kmp5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6977d659a02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.314 [INFO][5355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.314 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" iface="eth0" netns="" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.314 [INFO][5355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.314 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.396 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.399 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.399 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.411 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.411 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" HandleID="k8s-pod-network.2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--kmp5s-eth0" Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.414 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:53.423624 containerd[1621]: 2024-11-13 12:07:53.421 [INFO][5355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38" Nov 13 12:07:53.423624 containerd[1621]: time="2024-11-13T12:07:53.423673039Z" level=info msg="TearDown network for sandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\" successfully" Nov 13 12:07:53.429448 containerd[1621]: time="2024-11-13T12:07:53.428576266Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 13 12:07:53.429448 containerd[1621]: time="2024-11-13T12:07:53.428652029Z" level=info msg="RemovePodSandbox \"2c71cd07048b92e21d33e2c4e8ece5641c37ab1ae24040c92d87d3202a7e7f38\" returns successfully" Nov 13 12:07:53.430435 containerd[1621]: time="2024-11-13T12:07:53.429869956Z" level=info msg="StopPodSandbox for \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\"" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.610 [WARNING][5380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5224871-082e-4b53-9a0e-2a5888841a73", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585", Pod:"calico-apiserver-57579cb858-82wlr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a4b595039b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.610 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.610 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" iface="eth0" netns="" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.610 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.610 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.747 [INFO][5387] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.750 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.750 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.771 [WARNING][5387] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.771 [INFO][5387] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.778 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:53.788198 containerd[1621]: 2024-11-13 12:07:53.784 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:53.796446 containerd[1621]: time="2024-11-13T12:07:53.788269588Z" level=info msg="TearDown network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\" successfully" Nov 13 12:07:53.796446 containerd[1621]: time="2024-11-13T12:07:53.788321488Z" level=info msg="StopPodSandbox for \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\" returns successfully" Nov 13 12:07:53.796446 containerd[1621]: time="2024-11-13T12:07:53.789394515Z" level=info msg="RemovePodSandbox for \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\"" Nov 13 12:07:53.796446 containerd[1621]: time="2024-11-13T12:07:53.789456660Z" level=info msg="Forcibly stopping sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\"" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.874 [WARNING][5410] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0", GenerateName:"calico-apiserver-57579cb858-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5224871-082e-4b53-9a0e-2a5888841a73", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57579cb858", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"3a7cde201aab8907371659a24c5f01abbf5ef876e808b35f7919f97477d90585", Pod:"calico-apiserver-57579cb858-82wlr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a4b595039b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.874 [INFO][5410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.874 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" iface="eth0" netns="" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.877 [INFO][5410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.877 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.997 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.997 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:53.997 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:54.020 [WARNING][5418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:54.020 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" HandleID="k8s-pod-network.eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--apiserver--57579cb858--82wlr-eth0" Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:54.022 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:54.031650 containerd[1621]: 2024-11-13 12:07:54.025 [INFO][5410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb" Nov 13 12:07:54.033891 containerd[1621]: time="2024-11-13T12:07:54.032134927Z" level=info msg="TearDown network for sandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\" successfully" Nov 13 12:07:54.046079 containerd[1621]: time="2024-11-13T12:07:54.045528677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 13 12:07:54.046079 containerd[1621]: time="2024-11-13T12:07:54.045632056Z" level=info msg="RemovePodSandbox \"eaed35cb271ca46a0a4f2ded52a9a4684a9f4f2433afbcb7a12ce82687d709eb\" returns successfully" Nov 13 12:07:54.048883 containerd[1621]: time="2024-11-13T12:07:54.048248439Z" level=info msg="StopPodSandbox for \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\"" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.195 [WARNING][5438] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0", GenerateName:"calico-kube-controllers-668f5df9fb-", Namespace:"calico-system", SelfLink:"", UID:"36b3a588-3839-4c9b-9a4b-8639fa9043ec", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668f5df9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9", Pod:"calico-kube-controllers-668f5df9fb-7xn7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ce195757a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.195 [INFO][5438] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.195 [INFO][5438] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" iface="eth0" netns="" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.195 [INFO][5438] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.195 [INFO][5438] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.250 [INFO][5444] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.250 [INFO][5444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.250 [INFO][5444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.260 [WARNING][5444] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.260 [INFO][5444] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.262 [INFO][5444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:54.267400 containerd[1621]: 2024-11-13 12:07:54.265 [INFO][5438] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.269955 containerd[1621]: time="2024-11-13T12:07:54.267425956Z" level=info msg="TearDown network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\" successfully" Nov 13 12:07:54.269955 containerd[1621]: time="2024-11-13T12:07:54.267471039Z" level=info msg="StopPodSandbox for \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\" returns successfully" Nov 13 12:07:54.269955 containerd[1621]: time="2024-11-13T12:07:54.269409907Z" level=info msg="RemovePodSandbox for \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\"" Nov 13 12:07:54.269955 containerd[1621]: time="2024-11-13T12:07:54.269478414Z" level=info msg="Forcibly stopping sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\"" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.326 [WARNING][5462] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0", GenerateName:"calico-kube-controllers-668f5df9fb-", Namespace:"calico-system", SelfLink:"", UID:"36b3a588-3839-4c9b-9a4b-8639fa9043ec", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668f5df9fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"ae352efd14837365c1d88ece7dde3477d1b1962a13516686e49c032a4fd73ae9", Pod:"calico-kube-controllers-668f5df9fb-7xn7m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ce195757a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.326 [INFO][5462] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.326 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" iface="eth0" netns="" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.326 [INFO][5462] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.326 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.367 [INFO][5468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.368 [INFO][5468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.368 [INFO][5468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.377 [WARNING][5468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.377 [INFO][5468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" HandleID="k8s-pod-network.5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Workload="srv--sx7g0.gb1.brightbox.com-k8s-calico--kube--controllers--668f5df9fb--7xn7m-eth0" Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.379 [INFO][5468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:54.385846 containerd[1621]: 2024-11-13 12:07:54.381 [INFO][5462] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b" Nov 13 12:07:54.389376 containerd[1621]: time="2024-11-13T12:07:54.386143859Z" level=info msg="TearDown network for sandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\" successfully" Nov 13 12:07:54.397530 containerd[1621]: time="2024-11-13T12:07:54.397319547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 13 12:07:54.397530 containerd[1621]: time="2024-11-13T12:07:54.397430448Z" level=info msg="RemovePodSandbox \"5572f68acbacd57352449c2acfec5a91e5d3a7b1728cd7617c27dda20170908b\" returns successfully" Nov 13 12:07:54.399456 containerd[1621]: time="2024-11-13T12:07:54.399104748Z" level=info msg="StopPodSandbox for \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\"" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.454 [WARNING][5486] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc", Pod:"coredns-76f75df574-kz8bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bb31fb1f30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.455 [INFO][5486] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.455 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" iface="eth0" netns="" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.455 [INFO][5486] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.455 [INFO][5486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.498 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.499 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.499 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.511 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.511 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.514 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:54.521657 containerd[1621]: 2024-11-13 12:07:54.517 [INFO][5486] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.524607 containerd[1621]: time="2024-11-13T12:07:54.522314478Z" level=info msg="TearDown network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\" successfully" Nov 13 12:07:54.524607 containerd[1621]: time="2024-11-13T12:07:54.522372374Z" level=info msg="StopPodSandbox for \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\" returns successfully" Nov 13 12:07:54.524607 containerd[1621]: time="2024-11-13T12:07:54.523521404Z" level=info msg="RemovePodSandbox for \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\"" Nov 13 12:07:54.524607 containerd[1621]: time="2024-11-13T12:07:54.523584675Z" level=info msg="Forcibly stopping sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\"" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.661 [WARNING][5511] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b637d3e5-f4d0-4ad5-a801-bcd5490bb15d", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 12, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-sx7g0.gb1.brightbox.com", ContainerID:"99e2a0a51780132b53fb76356ce4ca2b567343fbcf35fd6a4e897a59b62855bc", Pod:"coredns-76f75df574-kz8bv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bb31fb1f30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.662 [INFO][5511] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.662 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" iface="eth0" netns="" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.662 [INFO][5511] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.662 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.752 [INFO][5517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.752 [INFO][5517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.752 [INFO][5517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.764 [WARNING][5517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.765 [INFO][5517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" HandleID="k8s-pod-network.ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Workload="srv--sx7g0.gb1.brightbox.com-k8s-coredns--76f75df574--kz8bv-eth0" Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.768 [INFO][5517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 12:07:54.778359 containerd[1621]: 2024-11-13 12:07:54.772 [INFO][5511] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c" Nov 13 12:07:54.781957 containerd[1621]: time="2024-11-13T12:07:54.780123771Z" level=info msg="TearDown network for sandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\" successfully" Nov 13 12:07:54.832072 containerd[1621]: time="2024-11-13T12:07:54.831989374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 13 12:07:54.832750 containerd[1621]: time="2024-11-13T12:07:54.832117760Z" level=info msg="RemovePodSandbox \"ebd9b2c005747731e48ce3608906a04ec66097d761f5b87f6837919d549e2c4c\" returns successfully" Nov 13 12:07:55.216788 containerd[1621]: time="2024-11-13T12:07:55.216720887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:55.219306 containerd[1621]: time="2024-11-13T12:07:55.218588292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 13 12:07:55.219306 containerd[1621]: time="2024-11-13T12:07:55.218650702Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:55.221852 containerd[1621]: time="2024-11-13T12:07:55.221801785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 12:07:55.224474 containerd[1621]: time="2024-11-13T12:07:55.223788310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.098122291s" Nov 13 12:07:55.224474 containerd[1621]: time="2024-11-13T12:07:55.223857944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image 
reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 13 12:07:55.229179 containerd[1621]: time="2024-11-13T12:07:55.228905443Z" level=info msg="CreateContainer within sandbox \"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 13 12:07:55.250798 containerd[1621]: time="2024-11-13T12:07:55.250735194Z" level=info msg="CreateContainer within sandbox \"4c1fbd2c2a04cf6f0ccfcfebcba953c0ccd60c42f79978542527e658c63a1cfd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"55ce52fd424b7d80f8f6c32f1e53e4f6f786e7f78081e3e0a92e0611ca5be418\"" Nov 13 12:07:55.253083 containerd[1621]: time="2024-11-13T12:07:55.252131138Z" level=info msg="StartContainer for \"55ce52fd424b7d80f8f6c32f1e53e4f6f786e7f78081e3e0a92e0611ca5be418\"" Nov 13 12:07:55.380035 containerd[1621]: time="2024-11-13T12:07:55.379954992Z" level=info msg="StartContainer for \"55ce52fd424b7d80f8f6c32f1e53e4f6f786e7f78081e3e0a92e0611ca5be418\" returns successfully" Nov 13 12:07:55.748944 kubelet[2916]: I1113 12:07:55.748696 2916 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-shdhv" podStartSLOduration=32.522219215 podStartE2EDuration="41.747223598s" podCreationTimestamp="2024-11-13 12:07:14 +0000 UTC" firstStartedPulling="2024-11-13 12:07:45.999267342 +0000 UTC m=+55.243448630" lastFinishedPulling="2024-11-13 12:07:55.22427172 +0000 UTC m=+64.468453013" observedRunningTime="2024-11-13 12:07:55.747193682 +0000 UTC m=+64.991374981" watchObservedRunningTime="2024-11-13 12:07:55.747223598 +0000 UTC m=+64.991404900" Nov 13 12:07:56.373611 kubelet[2916]: I1113 12:07:56.373511 2916 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 13 12:07:56.381999 kubelet[2916]: I1113 12:07:56.381606 2916 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 13 12:08:01.058412 systemd[1]: Started sshd@9-10.230.32.222:22-147.75.109.163:39156.service - OpenSSH per-connection server daemon (147.75.109.163:39156). Nov 13 12:08:02.022335 sshd[5589]: Accepted publickey for core from 147.75.109.163 port 39156 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:02.030159 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:02.071188 systemd-logind[1597]: New session 12 of user core. Nov 13 12:08:02.077528 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 13 12:08:03.376426 sshd[5589]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:03.386602 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit. Nov 13 12:08:03.390050 systemd[1]: sshd@9-10.230.32.222:22-147.75.109.163:39156.service: Deactivated successfully. Nov 13 12:08:03.399739 systemd[1]: session-12.scope: Deactivated successfully. Nov 13 12:08:03.402189 systemd-logind[1597]: Removed session 12. Nov 13 12:08:08.532327 systemd[1]: Started sshd@10-10.230.32.222:22-147.75.109.163:39166.service - OpenSSH per-connection server daemon (147.75.109.163:39166). 
Nov 13 12:08:09.445819 sshd[5618]: Accepted publickey for core from 147.75.109.163 port 39166 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:09.449295 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:09.456538 systemd-logind[1597]: New session 13 of user core. Nov 13 12:08:09.463432 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 13 12:08:10.278870 sshd[5618]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:10.286309 systemd[1]: sshd@10-10.230.32.222:22-147.75.109.163:39166.service: Deactivated successfully. Nov 13 12:08:10.292337 systemd[1]: session-13.scope: Deactivated successfully. Nov 13 12:08:10.293476 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. Nov 13 12:08:10.299951 systemd-logind[1597]: Removed session 13. Nov 13 12:08:15.434362 systemd[1]: Started sshd@11-10.230.32.222:22-147.75.109.163:39722.service - OpenSSH per-connection server daemon (147.75.109.163:39722). Nov 13 12:08:16.330916 sshd[5633]: Accepted publickey for core from 147.75.109.163 port 39722 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:16.334248 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:16.342420 systemd-logind[1597]: New session 14 of user core. Nov 13 12:08:16.348516 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 13 12:08:16.709403 systemd[1]: run-containerd-runc-k8s.io-7c8b8e24045ef548961a4d09e7cc1c601f90a291de7da9ec1bfbf5ac6654b9d3-runc.8iZDfh.mount: Deactivated successfully. Nov 13 12:08:17.100418 sshd[5633]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:17.115969 systemd[1]: sshd@11-10.230.32.222:22-147.75.109.163:39722.service: Deactivated successfully. Nov 13 12:08:17.122348 systemd[1]: session-14.scope: Deactivated successfully. Nov 13 12:08:17.124400 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. Nov 13 12:08:17.127322 systemd-logind[1597]: Removed session 14. Nov 13 12:08:17.250456 systemd[1]: Started sshd@12-10.230.32.222:22-147.75.109.163:39728.service - OpenSSH per-connection server daemon (147.75.109.163:39728). Nov 13 12:08:18.151972 sshd[5669]: Accepted publickey for core from 147.75.109.163 port 39728 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:18.153509 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:18.160636 systemd-logind[1597]: New session 15 of user core. Nov 13 12:08:18.167385 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 13 12:08:18.963348 sshd[5669]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:18.972278 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. Nov 13 12:08:18.973677 systemd[1]: sshd@12-10.230.32.222:22-147.75.109.163:39728.service: Deactivated successfully. Nov 13 12:08:18.982587 systemd[1]: session-15.scope: Deactivated successfully. Nov 13 12:08:18.985083 systemd-logind[1597]: Removed session 15. Nov 13 12:08:19.113676 systemd[1]: Started sshd@13-10.230.32.222:22-147.75.109.163:58366.service - OpenSSH per-connection server daemon (147.75.109.163:58366). 
Nov 13 12:08:20.022065 sshd[5681]: Accepted publickey for core from 147.75.109.163 port 58366 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:20.024378 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:20.035409 systemd-logind[1597]: New session 16 of user core. Nov 13 12:08:20.042835 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 13 12:08:20.893341 sshd[5681]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:20.906854 systemd[1]: sshd@13-10.230.32.222:22-147.75.109.163:58366.service: Deactivated successfully. Nov 13 12:08:20.912381 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. Nov 13 12:08:20.912577 systemd[1]: session-16.scope: Deactivated successfully. Nov 13 12:08:20.919504 systemd-logind[1597]: Removed session 16. Nov 13 12:08:24.402229 kubelet[2916]: I1113 12:08:24.402161 2916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 12:08:26.044717 systemd[1]: Started sshd@14-10.230.32.222:22-147.75.109.163:58382.service - OpenSSH per-connection server daemon (147.75.109.163:58382). Nov 13 12:08:26.985897 sshd[5711]: Accepted publickey for core from 147.75.109.163 port 58382 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:26.989481 sshd[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:26.998268 systemd-logind[1597]: New session 17 of user core. Nov 13 12:08:27.006462 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 13 12:08:27.802405 sshd[5711]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:27.811208 systemd[1]: sshd@14-10.230.32.222:22-147.75.109.163:58382.service: Deactivated successfully. Nov 13 12:08:27.819975 systemd[1]: session-17.scope: Deactivated successfully. Nov 13 12:08:27.821833 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. Nov 13 12:08:27.823612 systemd-logind[1597]: Removed session 17. Nov 13 12:08:32.962253 systemd[1]: Started sshd@15-10.230.32.222:22-147.75.109.163:46406.service - OpenSSH per-connection server daemon (147.75.109.163:46406). Nov 13 12:08:33.876049 sshd[5746]: Accepted publickey for core from 147.75.109.163 port 46406 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:33.880051 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:33.895617 systemd-logind[1597]: New session 18 of user core. Nov 13 12:08:33.901735 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 13 12:08:34.668899 sshd[5746]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:34.683784 systemd[1]: sshd@15-10.230.32.222:22-147.75.109.163:46406.service: Deactivated successfully. Nov 13 12:08:34.692552 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. Nov 13 12:08:34.693057 systemd[1]: session-18.scope: Deactivated successfully. Nov 13 12:08:34.695373 systemd-logind[1597]: Removed session 18. Nov 13 12:08:39.818359 systemd[1]: Started sshd@16-10.230.32.222:22-147.75.109.163:54678.service - OpenSSH per-connection server daemon (147.75.109.163:54678). 
Nov 13 12:08:40.735262 sshd[5762]: Accepted publickey for core from 147.75.109.163 port 54678 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:40.738982 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:40.745787 systemd-logind[1597]: New session 19 of user core. Nov 13 12:08:40.752448 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 13 12:08:41.589876 sshd[5762]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:41.597298 systemd[1]: sshd@16-10.230.32.222:22-147.75.109.163:54678.service: Deactivated successfully. Nov 13 12:08:41.598291 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. Nov 13 12:08:41.603155 systemd[1]: session-19.scope: Deactivated successfully. Nov 13 12:08:41.604835 systemd-logind[1597]: Removed session 19. Nov 13 12:08:41.739434 systemd[1]: Started sshd@17-10.230.32.222:22-147.75.109.163:54680.service - OpenSSH per-connection server daemon (147.75.109.163:54680). Nov 13 12:08:42.641157 sshd[5796]: Accepted publickey for core from 147.75.109.163 port 54680 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:42.644066 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:42.651661 systemd-logind[1597]: New session 20 of user core. Nov 13 12:08:42.658573 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 13 12:08:43.746978 sshd[5796]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:43.754635 systemd[1]: sshd@17-10.230.32.222:22-147.75.109.163:54680.service: Deactivated successfully. Nov 13 12:08:43.763037 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. Nov 13 12:08:43.763222 systemd[1]: session-20.scope: Deactivated successfully. Nov 13 12:08:43.766192 systemd-logind[1597]: Removed session 20. Nov 13 12:08:43.910446 systemd[1]: Started sshd@18-10.230.32.222:22-147.75.109.163:54694.service - OpenSSH per-connection server daemon (147.75.109.163:54694). Nov 13 12:08:44.821530 sshd[5812]: Accepted publickey for core from 147.75.109.163 port 54694 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:44.823742 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:44.835756 systemd-logind[1597]: New session 21 of user core. Nov 13 12:08:44.842038 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 13 12:08:47.844560 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 13 12:08:47.809742 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 13 12:08:47.809809 systemd-resolved[1517]: Flushed all caches. Nov 13 12:08:48.555710 sshd[5812]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:48.570603 systemd[1]: sshd@18-10.230.32.222:22-147.75.109.163:54694.service: Deactivated successfully. Nov 13 12:08:48.570916 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. Nov 13 12:08:48.584018 systemd[1]: session-21.scope: Deactivated successfully. Nov 13 12:08:48.587131 systemd-logind[1597]: Removed session 21. Nov 13 12:08:48.705464 systemd[1]: Started sshd@19-10.230.32.222:22-147.75.109.163:54710.service - OpenSSH per-connection server daemon (147.75.109.163:54710). 
Nov 13 12:08:49.620763 sshd[5853]: Accepted publickey for core from 147.75.109.163 port 54710 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:49.622605 sshd[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:49.631090 systemd-logind[1597]: New session 22 of user core. Nov 13 12:08:49.642482 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 13 12:08:49.868407 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 13 12:08:49.854388 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 13 12:08:49.854400 systemd-resolved[1517]: Flushed all caches. Nov 13 12:08:51.060886 sshd[5853]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:51.076298 systemd[1]: sshd@19-10.230.32.222:22-147.75.109.163:54710.service: Deactivated successfully. Nov 13 12:08:51.076984 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. Nov 13 12:08:51.091583 systemd[1]: session-22.scope: Deactivated successfully. Nov 13 12:08:51.095421 systemd-logind[1597]: Removed session 22. Nov 13 12:08:51.213688 systemd[1]: Started sshd@20-10.230.32.222:22-147.75.109.163:36740.service - OpenSSH per-connection server daemon (147.75.109.163:36740). Nov 13 12:08:51.915024 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 13 12:08:51.901435 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 13 12:08:51.901505 systemd-resolved[1517]: Flushed all caches. Nov 13 12:08:52.132610 sshd[5867]: Accepted publickey for core from 147.75.109.163 port 36740 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:52.136326 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:52.153539 systemd-logind[1597]: New session 23 of user core. Nov 13 12:08:52.162333 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 13 12:08:53.011408 sshd[5867]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:53.018813 systemd[1]: sshd@20-10.230.32.222:22-147.75.109.163:36740.service: Deactivated successfully. Nov 13 12:08:53.035722 systemd[1]: session-23.scope: Deactivated successfully. Nov 13 12:08:53.037951 systemd-logind[1597]: Session 23 logged out. Waiting for processes to exit. Nov 13 12:08:53.043939 systemd-logind[1597]: Removed session 23. Nov 13 12:08:58.164431 systemd[1]: Started sshd@21-10.230.32.222:22-147.75.109.163:36742.service - OpenSSH per-connection server daemon (147.75.109.163:36742). Nov 13 12:08:59.068055 sshd[5884]: Accepted publickey for core from 147.75.109.163 port 36742 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:08:59.070741 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:08:59.078363 systemd-logind[1597]: New session 24 of user core. Nov 13 12:08:59.085508 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 13 12:08:59.825880 sshd[5884]: pam_unix(sshd:session): session closed for user core Nov 13 12:08:59.832359 systemd-logind[1597]: Session 24 logged out. Waiting for processes to exit. Nov 13 12:08:59.832802 systemd[1]: sshd@21-10.230.32.222:22-147.75.109.163:36742.service: Deactivated successfully. Nov 13 12:08:59.836922 systemd[1]: session-24.scope: Deactivated successfully. Nov 13 12:08:59.838557 systemd-logind[1597]: Removed session 24. 
Nov 13 12:09:04.978344 systemd[1]: Started sshd@22-10.230.32.222:22-147.75.109.163:53036.service - OpenSSH per-connection server daemon (147.75.109.163:53036). Nov 13 12:09:05.865396 sshd[5923]: Accepted publickey for core from 147.75.109.163 port 53036 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:09:05.867900 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:09:05.875446 systemd-logind[1597]: New session 25 of user core. Nov 13 12:09:05.885897 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 13 12:09:06.623815 sshd[5923]: pam_unix(sshd:session): session closed for user core Nov 13 12:09:06.629711 systemd[1]: sshd@22-10.230.32.222:22-147.75.109.163:53036.service: Deactivated successfully. Nov 13 12:09:06.630842 systemd-logind[1597]: Session 25 logged out. Waiting for processes to exit. Nov 13 12:09:06.638084 systemd[1]: session-25.scope: Deactivated successfully. Nov 13 12:09:06.641469 systemd-logind[1597]: Removed session 25. Nov 13 12:09:11.775971 systemd[1]: Started sshd@23-10.230.32.222:22-147.75.109.163:56924.service - OpenSSH per-connection server daemon (147.75.109.163:56924). Nov 13 12:09:12.673109 sshd[5941]: Accepted publickey for core from 147.75.109.163 port 56924 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 12:09:12.676986 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 12:09:12.687978 systemd-logind[1597]: New session 26 of user core. Nov 13 12:09:12.694558 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 13 12:09:13.416304 sshd[5941]: pam_unix(sshd:session): session closed for user core Nov 13 12:09:13.423261 systemd[1]: sshd@23-10.230.32.222:22-147.75.109.163:56924.service: Deactivated successfully. Nov 13 12:09:13.427161 systemd-logind[1597]: Session 26 logged out. Waiting for processes to exit. Nov 13 12:09:13.427927 systemd[1]: session-26.scope: Deactivated successfully. Nov 13 12:09:13.431336 systemd-logind[1597]: Removed session 26.