Jan 30 18:06:09.046378 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 18:06:09.046425 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 18:06:09.046440 kernel: BIOS-provided physical RAM map:
Jan 30 18:06:09.046456 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 18:06:09.046466 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 18:06:09.046475 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 18:06:09.046486 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 30 18:06:09.046496 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 30 18:06:09.046506 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 18:06:09.046516 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 18:06:09.046526 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 18:06:09.046536 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 18:06:09.046559 kernel: NX (Execute Disable) protection: active
Jan 30 18:06:09.046571 kernel: APIC: Static calls initialized
Jan 30 18:06:09.046583 kernel: SMBIOS 2.8 present.
Jan 30 18:06:09.046599 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 30 18:06:09.046611 kernel: Hypervisor detected: KVM
Jan 30 18:06:09.046627 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 18:06:09.046638 kernel: kvm-clock: using sched offset of 5120025087 cycles
Jan 30 18:06:09.046650 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 18:06:09.046661 kernel: tsc: Detected 2799.998 MHz processor
Jan 30 18:06:09.046672 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 18:06:09.046683 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 18:06:09.046694 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 30 18:06:09.046704 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 18:06:09.046727 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 18:06:09.046744 kernel: Using GB pages for direct mapping
Jan 30 18:06:09.046755 kernel: ACPI: Early table checksum verification disabled
Jan 30 18:06:09.046765 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 30 18:06:09.046776 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:06:09.046787 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:06:09.046798 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:06:09.046809 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 30 18:06:09.046819 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:06:09.046830 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:06:09.046846 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:06:09.046856 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 18:06:09.046867 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 30 18:06:09.046878 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 30 18:06:09.046889 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 30 18:06:09.046906 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 30 18:06:09.046917 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 30 18:06:09.046933 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 30 18:06:09.046945 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 30 18:06:09.046956 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 18:06:09.046974 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 18:06:09.046986 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 30 18:06:09.046997 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 30 18:06:09.047008 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 30 18:06:09.047020 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 30 18:06:09.047036 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 30 18:06:09.047048 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 30 18:06:09.047059 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 30 18:06:09.047070 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 30 18:06:09.047081 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 30 18:06:09.047093 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 30 18:06:09.047104 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 30 18:06:09.047115 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 30 18:06:09.047131 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 30 18:06:09.047149 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 30 18:06:09.047160 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 18:06:09.047172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 18:06:09.047183 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 30 18:06:09.047195 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 30 18:06:09.047206 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 30 18:06:09.047218 kernel: Zone ranges:
Jan 30 18:06:09.047229 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 18:06:09.047240 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 30 18:06:09.047257 kernel: Normal empty
Jan 30 18:06:09.047268 kernel: Movable zone start for each node
Jan 30 18:06:09.047280 kernel: Early memory node ranges
Jan 30 18:06:09.047291 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 18:06:09.047302 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 30 18:06:09.047313 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 30 18:06:09.047325 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 18:06:09.047336 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 18:06:09.047353 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 30 18:06:09.047365 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 18:06:09.047382 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 18:06:09.047394 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 18:06:09.047418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 18:06:09.047431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 18:06:09.047442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 18:06:09.047454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 18:06:09.047465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 18:06:09.047477 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 18:06:09.047488 kernel: TSC deadline timer available
Jan 30 18:06:09.047506 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 30 18:06:09.047518 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 18:06:09.047529 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 18:06:09.047541 kernel: Booting paravirtualized kernel on KVM
Jan 30 18:06:09.047552 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 18:06:09.047564 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 18:06:09.047575 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 18:06:09.047586 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 18:06:09.047598 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 18:06:09.047614 kernel: kvm-guest: PV spinlocks enabled
Jan 30 18:06:09.047626 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 18:06:09.047639 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 18:06:09.047651 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 18:06:09.047662 kernel: random: crng init done
Jan 30 18:06:09.047673 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 18:06:09.047685 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 18:06:09.047696 kernel: Fallback order for Node 0: 0
Jan 30 18:06:09.047723 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 30 18:06:09.047740 kernel: Policy zone: DMA32
Jan 30 18:06:09.047753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 18:06:09.047764 kernel: software IO TLB: area num 16.
Jan 30 18:06:09.047776 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 194828K reserved, 0K cma-reserved)
Jan 30 18:06:09.047788 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 18:06:09.047799 kernel: Kernel/User page tables isolation: enabled
Jan 30 18:06:09.047810 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 18:06:09.047828 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 18:06:09.047840 kernel: Dynamic Preempt: voluntary
Jan 30 18:06:09.047851 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 18:06:09.047863 kernel: rcu: RCU event tracing is enabled.
Jan 30 18:06:09.047875 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 18:06:09.047887 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 18:06:09.047927 kernel: Rude variant of Tasks RCU enabled.
Jan 30 18:06:09.047948 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 18:06:09.047960 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 18:06:09.047972 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 18:06:09.047984 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 30 18:06:09.047996 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 18:06:09.048013 kernel: Console: colour VGA+ 80x25
Jan 30 18:06:09.048025 kernel: printk: console [tty0] enabled
Jan 30 18:06:09.048037 kernel: printk: console [ttyS0] enabled
Jan 30 18:06:09.048049 kernel: ACPI: Core revision 20230628
Jan 30 18:06:09.048061 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 18:06:09.048073 kernel: x2apic enabled
Jan 30 18:06:09.048090 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 18:06:09.048107 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 30 18:06:09.048121 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 30 18:06:09.048133 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 18:06:09.048145 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 18:06:09.048157 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 18:06:09.048169 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 18:06:09.048181 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 18:06:09.048193 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 18:06:09.048210 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 18:06:09.048235 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 18:06:09.048246 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 18:06:09.048258 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 18:06:09.048269 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 18:06:09.048280 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 30 18:06:09.048292 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 30 18:06:09.048303 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 18:06:09.048315 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 18:06:09.048327 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 18:06:09.048338 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 18:06:09.048354 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 18:06:09.048366 kernel: Freeing SMP alternatives memory: 32K
Jan 30 18:06:09.048382 kernel: pid_max: default: 32768 minimum: 301
Jan 30 18:06:09.048395 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 18:06:09.048461 kernel: landlock: Up and running.
Jan 30 18:06:09.048476 kernel: SELinux: Initializing.
Jan 30 18:06:09.048488 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 18:06:09.048500 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 18:06:09.048512 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 30 18:06:09.048524 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 18:06:09.048536 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 18:06:09.048556 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 18:06:09.048568 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 30 18:06:09.048580 kernel: signal: max sigframe size: 1776
Jan 30 18:06:09.048592 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 18:06:09.048605 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 18:06:09.048617 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 18:06:09.048629 kernel: smp: Bringing up secondary CPUs ...
Jan 30 18:06:09.048641 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 18:06:09.048653 kernel: .... node #0, CPUs: #1
Jan 30 18:06:09.048671 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 30 18:06:09.048683 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 18:06:09.048695 kernel: smpboot: Max logical packages: 16
Jan 30 18:06:09.048716 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 30 18:06:09.048738 kernel: devtmpfs: initialized
Jan 30 18:06:09.048750 kernel: x86/mm: Memory block size: 128MB
Jan 30 18:06:09.048762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 18:06:09.048775 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 18:06:09.048787 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 18:06:09.048805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 18:06:09.048817 kernel: audit: initializing netlink subsys (disabled)
Jan 30 18:06:09.048829 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 18:06:09.048841 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 18:06:09.048853 kernel: audit: type=2000 audit(1738260367.792:1): state=initialized audit_enabled=0 res=1
Jan 30 18:06:09.048864 kernel: cpuidle: using governor menu
Jan 30 18:06:09.048877 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 18:06:09.048889 kernel: dca service started, version 1.12.1
Jan 30 18:06:09.048901 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 18:06:09.048918 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 18:06:09.048930 kernel: PCI: Using configuration type 1 for base access
Jan 30 18:06:09.048943 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 18:06:09.048955 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 18:06:09.048967 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 18:06:09.048979 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 18:06:09.048991 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 18:06:09.049003 kernel: ACPI: Added _OSI(Module Device)
Jan 30 18:06:09.049015 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 18:06:09.049032 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 18:06:09.049044 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 18:06:09.049056 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 18:06:09.049068 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 18:06:09.049080 kernel: ACPI: Interpreter enabled
Jan 30 18:06:09.049092 kernel: ACPI: PM: (supports S0 S5)
Jan 30 18:06:09.049103 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 18:06:09.049115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 18:06:09.049127 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 18:06:09.049144 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 18:06:09.049157 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 18:06:09.050030 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 18:06:09.050246 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 18:06:09.050413 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 18:06:09.050451 kernel: PCI host bridge to bus 0000:00
Jan 30 18:06:09.050630 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 18:06:09.050808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 18:06:09.050958 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 18:06:09.051107 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 30 18:06:09.051257 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 18:06:09.051417 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 30 18:06:09.051571 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 18:06:09.051810 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 18:06:09.052034 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 30 18:06:09.052208 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 30 18:06:09.052502 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 30 18:06:09.052679 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 30 18:06:09.052861 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 18:06:09.053050 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.053227 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 30 18:06:09.053431 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.053604 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 30 18:06:09.053804 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.053993 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 30 18:06:09.054204 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.054382 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 30 18:06:09.054582 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.054758 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 30 18:06:09.054948 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.055136 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 30 18:06:09.055321 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.055532 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 30 18:06:09.055746 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 18:06:09.055916 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 30 18:06:09.056113 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 18:06:09.056281 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 30 18:06:09.057726 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 30 18:06:09.057904 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 18:06:09.058079 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 30 18:06:09.058264 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 18:06:09.058444 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 18:06:09.058608 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 30 18:06:09.058784 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 30 18:06:09.058973 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 18:06:09.059152 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 18:06:09.059354 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 18:06:09.059538 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 30 18:06:09.061623 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 30 18:06:09.061886 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 18:06:09.062058 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 18:06:09.062249 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 30 18:06:09.063526 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 30 18:06:09.063722 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 18:06:09.063893 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 18:06:09.064058 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 18:06:09.064269 kernel: pci_bus 0000:02: extended config space not accessible
Jan 30 18:06:09.065534 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 30 18:06:09.065740 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 30 18:06:09.065914 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 18:06:09.066085 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 18:06:09.066275 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 18:06:09.068249 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 30 18:06:09.068533 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 18:06:09.068718 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 18:06:09.068897 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 18:06:09.069092 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 18:06:09.069269 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 18:06:09.069465 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 18:06:09.069632 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 18:06:09.069810 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 18:06:09.069978 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 18:06:09.070139 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 18:06:09.070309 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 18:06:09.070495 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 18:06:09.070660 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 18:06:09.070838 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 18:06:09.071007 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 18:06:09.071170 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 18:06:09.071334 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 18:06:09.072569 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 18:06:09.072762 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 18:06:09.072927 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 18:06:09.073092 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 18:06:09.073253 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 18:06:09.074435 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 18:06:09.074459 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 18:06:09.074473 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 18:06:09.074486 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 18:06:09.074505 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 18:06:09.074518 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 18:06:09.074530 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 18:06:09.074543 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 18:06:09.074555 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 18:06:09.074567 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 18:06:09.074579 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 18:06:09.074591 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 18:06:09.074604 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 18:06:09.074621 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 18:06:09.074633 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 18:06:09.074645 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 18:06:09.074658 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 18:06:09.074671 kernel: iommu: Default domain type: Translated
Jan 30 18:06:09.074683 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 18:06:09.074695 kernel: PCI: Using ACPI for IRQ routing
Jan 30 18:06:09.074719 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 18:06:09.074733 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 18:06:09.074751 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 30 18:06:09.074921 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 18:06:09.075086 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 18:06:09.075247 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 18:06:09.075267 kernel: vgaarb: loaded
Jan 30 18:06:09.075279 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 18:06:09.075291 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 18:06:09.075304 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 18:06:09.075323 kernel: pnp: PnP ACPI init
Jan 30 18:06:09.075537 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 18:06:09.075558 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 18:06:09.075571 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 18:06:09.075583 kernel: NET: Registered PF_INET protocol family
Jan 30 18:06:09.075595 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 18:06:09.075608 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 18:06:09.075620 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 18:06:09.075632 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 18:06:09.075652 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 18:06:09.075665 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 18:06:09.075677 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 18:06:09.075689 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 18:06:09.075702 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 18:06:09.075725 kernel: NET: Registered PF_XDP protocol family
Jan 30 18:06:09.075891 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 30 18:06:09.076055 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 18:06:09.076227 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 18:06:09.076390 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 18:06:09.077601 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 18:06:09.077785 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 18:06:09.077949 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 18:06:09.078111 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 18:06:09.078282 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 18:06:09.078500 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 18:06:09.078664 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 18:06:09.078845 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 18:06:09.079006 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 18:06:09.079168 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 18:06:09.079332 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 18:06:09.079521 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 18:06:09.079737 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 18:06:09.079919 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 18:06:09.080086 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 18:06:09.080252 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 18:06:09.082456 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 18:06:09.082639 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 18:06:09.082824 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 18:06:09.082992 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 18:06:09.083165 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 18:06:09.083327 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 18:06:09.083551 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 18:06:09.083726 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 18:06:09.083891 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 18:06:09.084062 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 18:06:09.084231 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 18:06:09.084392 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 18:06:09.085318 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 18:06:09.085507 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 18:06:09.085671 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 18:06:09.085849 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 18:06:09.086013 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 18:06:09.086175 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 18:06:09.086336 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 18:06:09.086602 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 18:06:09.086787 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 18:06:09.086953 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 18:06:09.087116 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 18:06:09.087283 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 18:06:09.087501 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 18:06:09.087664 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 18:06:09.087840 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 18:06:09.088011 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 18:06:09.088173 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 18:06:09.088334 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 18:06:09.088516 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 18:06:09.088679 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 18:06:09.088860 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 18:06:09.089007 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 30 18:06:09.089165 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 18:06:09.089324 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 30 18:06:09.089549 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 18:06:09.089719 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 30 18:06:09.089876 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 18:06:09.090050 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 18:06:09.090217 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 30 18:06:09.090370 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 18:06:09.090552 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 18:06:09.090744 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 30 18:06:09.090912 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 18:06:09.091080 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 18:06:09.091281 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 30 18:06:09.093484 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 18:06:09.093652 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 18:06:09.093852 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 30 18:06:09.094009 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 18:06:09.094162 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 18:06:09.094342 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 30 18:06:09.095550 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 18:06:09.095720 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 18:06:09.095896 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 30 18:06:09.096051 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 18:06:09.096204 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 18:06:09.096380 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 30 18:06:09.097585 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 30 18:06:09.097796 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 18:06:09.097819 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 18:06:09.097832 kernel: PCI: CLS 0 bytes, default 64
Jan 30 18:06:09.097845 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan
30 18:06:09.097858 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 30 18:06:09.097871 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 18:06:09.097884 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 30 18:06:09.097898 kernel: Initialise system trusted keyrings Jan 30 18:06:09.097918 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 18:06:09.097931 kernel: Key type asymmetric registered Jan 30 18:06:09.097944 kernel: Asymmetric key parser 'x509' registered Jan 30 18:06:09.097956 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 18:06:09.097969 kernel: io scheduler mq-deadline registered Jan 30 18:06:09.097982 kernel: io scheduler kyber registered Jan 30 18:06:09.097995 kernel: io scheduler bfq registered Jan 30 18:06:09.098163 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 18:06:09.098330 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 18:06:09.098533 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.098700 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 18:06:09.098878 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 30 18:06:09.099041 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.099207 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 30 18:06:09.099368 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 30 18:06:09.101576 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.101765 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 30 
18:06:09.101933 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 30 18:06:09.102099 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.102275 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 30 18:06:09.102456 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 30 18:06:09.102631 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.102815 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 30 18:06:09.102982 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 30 18:06:09.103147 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.103316 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 30 18:06:09.104906 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 30 18:06:09.105087 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.105255 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 30 18:06:09.107454 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 30 18:06:09.107625 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 18:06:09.107647 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 18:06:09.107662 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 18:06:09.107683 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 18:06:09.107697 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 18:06:09.107726 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 18:06:09.107747 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 18:06:09.107770 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 18:06:09.107794 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 18:06:09.107816 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 18:06:09.108113 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 18:06:09.108322 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 18:06:09.108522 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T18:06:08 UTC (1738260368) Jan 30 18:06:09.108692 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 18:06:09.108724 kernel: intel_pstate: CPU model not supported Jan 30 18:06:09.108738 kernel: NET: Registered PF_INET6 protocol family Jan 30 18:06:09.108750 kernel: Segment Routing with IPv6 Jan 30 18:06:09.108763 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 18:06:09.108775 kernel: NET: Registered PF_PACKET protocol family Jan 30 18:06:09.108788 kernel: Key type dns_resolver registered Jan 30 18:06:09.108809 kernel: IPI shorthand broadcast: enabled Jan 30 18:06:09.108822 kernel: sched_clock: Marking stable (1693025588, 230472835)->(2071510563, -148012140) Jan 30 18:06:09.108834 kernel: registered taskstats version 1 Jan 30 18:06:09.108847 kernel: Loading compiled-in X.509 certificates Jan 30 18:06:09.108860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 18:06:09.108873 kernel: Key type .fscrypt registered Jan 30 18:06:09.108885 kernel: Key type fscrypt-provisioning registered Jan 30 18:06:09.108898 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 18:06:09.108916 kernel: ima: Allocated hash algorithm: sha1
Jan 30 18:06:09.108929 kernel: ima: No architecture policies found
Jan 30 18:06:09.108942 kernel: clk: Disabling unused clocks
Jan 30 18:06:09.108955 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 18:06:09.108967 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 18:06:09.108980 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 18:06:09.108993 kernel: Run /init as init process
Jan 30 18:06:09.109005 kernel: with arguments:
Jan 30 18:06:09.109018 kernel: /init
Jan 30 18:06:09.109031 kernel: with environment:
Jan 30 18:06:09.109048 kernel: HOME=/
Jan 30 18:06:09.109061 kernel: TERM=linux
Jan 30 18:06:09.109073 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 18:06:09.109089 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 18:06:09.109105 systemd[1]: Detected virtualization kvm.
Jan 30 18:06:09.109119 systemd[1]: Detected architecture x86-64.
Jan 30 18:06:09.109132 systemd[1]: Running in initrd.
Jan 30 18:06:09.109150 systemd[1]: No hostname configured, using default hostname.
Jan 30 18:06:09.109163 systemd[1]: Hostname set to .
Jan 30 18:06:09.109177 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 18:06:09.109191 systemd[1]: Queued start job for default target initrd.target.
Jan 30 18:06:09.109204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 18:06:09.109218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 18:06:09.109232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 18:06:09.109246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 18:06:09.109265 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 18:06:09.109279 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 18:06:09.109295 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 18:06:09.109309 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 18:06:09.109323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 18:06:09.109336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 18:06:09.109350 systemd[1]: Reached target paths.target - Path Units.
Jan 30 18:06:09.109369 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 18:06:09.109383 systemd[1]: Reached target swap.target - Swaps.
Jan 30 18:06:09.109396 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 18:06:09.110455 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 18:06:09.110476 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 18:06:09.110490 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 18:06:09.110504 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 18:06:09.110517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 18:06:09.110531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 18:06:09.110552 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 18:06:09.110566 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 18:06:09.110580 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 18:06:09.110599 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 18:06:09.110613 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 18:06:09.110626 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 18:06:09.110640 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 18:06:09.110654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 18:06:09.110672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 18:06:09.110753 systemd-journald[201]: Collecting audit messages is disabled.
Jan 30 18:06:09.110787 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 18:06:09.110801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 18:06:09.110822 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 18:06:09.110837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 18:06:09.110859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 18:06:09.110879 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 18:06:09.110898 kernel: Bridge firewalling registered
Jan 30 18:06:09.110913 systemd-journald[201]: Journal started
Jan 30 18:06:09.110938 systemd-journald[201]: Runtime Journal (/run/log/journal/b2c2b17f80d14413b80fa1bc7449983e) is 4.7M, max 38.0M, 33.2M free.
Jan 30 18:06:09.057200 systemd-modules-load[202]: Inserted module 'overlay'
Jan 30 18:06:09.169988 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 18:06:09.110894 systemd-modules-load[202]: Inserted module 'br_netfilter'
Jan 30 18:06:09.171026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 18:06:09.172203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:06:09.182833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 18:06:09.184594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 18:06:09.198561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 18:06:09.201626 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 18:06:09.209600 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 18:06:09.211481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 18:06:09.232670 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 18:06:09.233790 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 18:06:09.237148 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 18:06:09.248730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 18:06:09.251530 dracut-cmdline[232]: dracut-dracut-053
Jan 30 18:06:09.256648 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 18:06:09.287665 systemd-resolved[239]: Positive Trust Anchors:
Jan 30 18:06:09.287684 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 18:06:09.287737 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 18:06:09.296008 systemd-resolved[239]: Defaulting to hostname 'linux'.
Jan 30 18:06:09.297782 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 18:06:09.298908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 18:06:09.361476 kernel: SCSI subsystem initialized
Jan 30 18:06:09.372479 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 18:06:09.385456 kernel: iscsi: registered transport (tcp)
Jan 30 18:06:09.410669 kernel: iscsi: registered transport (qla4xxx)
Jan 30 18:06:09.410761 kernel: QLogic iSCSI HBA Driver
Jan 30 18:06:09.465789 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 18:06:09.472679 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 18:06:09.505166 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 18:06:09.505249 kernel: device-mapper: uevent: version 1.0.3
Jan 30 18:06:09.507316 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 18:06:09.553452 kernel: raid6: sse2x4 gen() 14673 MB/s
Jan 30 18:06:09.571443 kernel: raid6: sse2x2 gen() 9946 MB/s
Jan 30 18:06:09.589891 kernel: raid6: sse2x1 gen() 10395 MB/s
Jan 30 18:06:09.589941 kernel: raid6: using algorithm sse2x4 gen() 14673 MB/s
Jan 30 18:06:09.608947 kernel: raid6: .... xor() 8351 MB/s, rmw enabled
Jan 30 18:06:09.609033 kernel: raid6: using ssse3x2 recovery algorithm
Jan 30 18:06:09.633454 kernel: xor: automatically using best checksumming function avx
Jan 30 18:06:09.818446 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 18:06:09.832574 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 18:06:09.841633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 18:06:09.857799 systemd-udevd[419]: Using default interface naming scheme 'v255'.
Jan 30 18:06:09.864383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 18:06:09.869638 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 18:06:09.897143 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jan 30 18:06:09.936033 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 18:06:09.943604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 18:06:10.051464 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 18:06:10.059649 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 18:06:10.088884 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 18:06:10.091090 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 18:06:10.091859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 18:06:10.094498 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 18:06:10.101654 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 18:06:10.130018 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 18:06:10.176296 kernel: ACPI: bus type USB registered
Jan 30 18:06:10.176357 kernel: usbcore: registered new interface driver usbfs
Jan 30 18:06:10.176376 kernel: usbcore: registered new interface driver hub
Jan 30 18:06:10.178429 kernel: usbcore: registered new device driver usb
Jan 30 18:06:10.186431 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 30 18:06:10.263546 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 18:06:10.263795 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 30 18:06:10.264004 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 18:06:10.264229 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 18:06:10.264457 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 30 18:06:10.264661 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 18:06:10.264874 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 18:06:10.265052 kernel: hub 1-0:1.0: USB hub found
Jan 30 18:06:10.265284 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 18:06:10.265699 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 18:06:10.265996 kernel: hub 2-0:1.0: USB hub found
Jan 30 18:06:10.266217 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 18:06:10.266238 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 18:06:10.266443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 18:06:10.266463 kernel: GPT:17805311 != 125829119
Jan 30 18:06:10.266481 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 18:06:10.266497 kernel: GPT:17805311 != 125829119
Jan 30 18:06:10.266532 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 18:06:10.266552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:06:10.266569 kernel: AVX version of gcm_enc/dec engaged.
Jan 30 18:06:10.266586 kernel: AES CTR mode by8 optimization enabled
Jan 30 18:06:10.263651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 18:06:10.263828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 18:06:10.266213 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 18:06:10.268303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 18:06:10.268505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:06:10.269208 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 18:06:10.278659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 18:06:10.301449 kernel: libata version 3.00 loaded.
Jan 30 18:06:10.320071 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (475)
Jan 30 18:06:10.386444 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (476)
Jan 30 18:06:10.392808 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 18:06:10.483664 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 18:06:10.484119 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 18:06:10.484143 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 18:06:10.484393 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 18:06:10.485707 kernel: scsi host0: ahci
Jan 30 18:06:10.486026 kernel: scsi host1: ahci
Jan 30 18:06:10.486294 kernel: scsi host2: ahci
Jan 30 18:06:10.486624 kernel: scsi host3: ahci
Jan 30 18:06:10.486897 kernel: scsi host4: ahci
Jan 30 18:06:10.487160 kernel: scsi host5: ahci
Jan 30 18:06:10.487471 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jan 30 18:06:10.487493 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jan 30 18:06:10.487511 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jan 30 18:06:10.487528 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jan 30 18:06:10.487545 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jan 30 18:06:10.487562 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jan 30 18:06:10.490216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:06:10.494452 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 18:06:10.499938 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 18:06:10.515367 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 18:06:10.516231 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 18:06:10.523834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 18:06:10.530654 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 18:06:10.538073 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 18:06:10.549363 disk-uuid[564]: Primary Header is updated.
Jan 30 18:06:10.549363 disk-uuid[564]: Secondary Entries is updated.
Jan 30 18:06:10.549363 disk-uuid[564]: Secondary Header is updated.
Jan 30 18:06:10.556777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:06:10.559993 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 18:06:10.566511 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:06:10.659658 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 18:06:10.738523 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 18:06:10.738600 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 18:06:10.741700 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 18:06:10.741743 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 18:06:10.744083 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 18:06:10.744709 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 18:06:10.756355 kernel: usbcore: registered new interface driver usbhid
Jan 30 18:06:10.756394 kernel: usbhid: USB HID core driver
Jan 30 18:06:10.763878 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 30 18:06:10.763919 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 30 18:06:11.568532 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 18:06:11.570229 disk-uuid[569]: The operation has completed successfully.
Jan 30 18:06:11.622701 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 18:06:11.622851 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 18:06:11.640654 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 18:06:11.646192 sh[585]: Success
Jan 30 18:06:11.662712 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Jan 30 18:06:11.731728 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 18:06:11.733692 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 18:06:11.737125 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 18:06:11.769448 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 18:06:11.769545 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 18:06:11.769568 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 18:06:11.771485 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 18:06:11.773986 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 18:06:11.783732 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 18:06:11.785144 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 18:06:11.791608 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 18:06:11.794102 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 18:06:11.811613 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 18:06:11.811677 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 18:06:11.811699 kernel: BTRFS info (device vda6): using free space tree
Jan 30 18:06:11.816456 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 18:06:11.830599 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 18:06:11.830263 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 18:06:11.839083 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 18:06:11.844626 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 18:06:12.005086 ignition[672]: Ignition 2.19.0
Jan 30 18:06:12.005106 ignition[672]: Stage: fetch-offline
Jan 30 18:06:12.005186 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jan 30 18:06:12.005206 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 18:06:12.005388 ignition[672]: parsed url from cmdline: ""
Jan 30 18:06:12.005395 ignition[672]: no config URL provided
Jan 30 18:06:12.005420 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 18:06:12.009469 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 18:06:12.005440 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jan 30 18:06:12.005449 ignition[672]: failed to fetch config: resource requires networking
Jan 30 18:06:12.005704 ignition[672]: Ignition finished successfully
Jan 30 18:06:12.016583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 18:06:12.023652 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 18:06:12.062616 systemd-networkd[774]: lo: Link UP
Jan 30 18:06:12.062634 systemd-networkd[774]: lo: Gained carrier
Jan 30 18:06:12.064749 systemd-networkd[774]: Enumeration completed
Jan 30 18:06:12.065268 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 18:06:12.065273 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 18:06:12.065829 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 18:06:12.066709 systemd-networkd[774]: eth0: Link UP
Jan 30 18:06:12.066714 systemd-networkd[774]: eth0: Gained carrier
Jan 30 18:06:12.066725 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 18:06:12.067502 systemd[1]: Reached target network.target - Network.
Jan 30 18:06:12.075581 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 18:06:12.095859 systemd-networkd[774]: eth0: DHCPv4 address 10.230.68.22/30, gateway 10.230.68.21 acquired from 10.230.68.21
Jan 30 18:06:12.114556 ignition[776]: Ignition 2.19.0
Jan 30 18:06:12.114578 ignition[776]: Stage: fetch
Jan 30 18:06:12.114829 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jan 30 18:06:12.114849 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 18:06:12.115013 ignition[776]: parsed url from cmdline: ""
Jan 30 18:06:12.115020 ignition[776]: no config URL provided
Jan 30 18:06:12.115030 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 18:06:12.115047 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Jan 30 18:06:12.115196 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 30 18:06:12.115373 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 30 18:06:12.115440 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 30 18:06:12.130959 ignition[776]: GET result: OK
Jan 30 18:06:12.131938 ignition[776]: parsing config with SHA512: 53d9240f6f868bfe5ed40dfced8121b028bc96f4644faac6a66e69cc32dfd1f6e38bdf97d3dd57720d1fdf9e79a3961a15b33d28ee3ee33415b98fc78307d7a5
Jan 30 18:06:12.137558 unknown[776]: fetched base config from "system"
Jan 30 18:06:12.138362 unknown[776]: fetched base config from "system"
Jan 30 18:06:12.138874 ignition[776]: fetch: fetch complete
Jan 30 18:06:12.138376 unknown[776]: fetched user config from "openstack"
Jan 30 18:06:12.138882 ignition[776]: fetch: fetch passed
Jan 30 18:06:12.140753 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 18:06:12.138947 ignition[776]: Ignition finished successfully
Jan 30 18:06:12.151720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 18:06:12.179701 ignition[783]: Ignition 2.19.0
Jan 30 18:06:12.179722 ignition[783]: Stage: kargs
Jan 30 18:06:12.180025 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 30 18:06:12.180045 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 18:06:12.185581 ignition[783]: kargs: kargs passed
Jan 30 18:06:12.186361 ignition[783]: Ignition finished successfully
Jan 30 18:06:12.188847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 18:06:12.196704 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 18:06:12.217974 ignition[789]: Ignition 2.19.0
Jan 30 18:06:12.219000 ignition[789]: Stage: disks
Jan 30 18:06:12.219942 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 30 18:06:12.220730 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 18:06:12.222897 ignition[789]: disks: disks passed
Jan 30 18:06:12.223635 ignition[789]: Ignition finished successfully
Jan 30 18:06:12.225457 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 18:06:12.226885 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 18:06:12.227701 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 18:06:12.229282 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 18:06:12.230880 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 18:06:12.232318 systemd[1]: Reached target basic.target - Basic System.
Jan 30 18:06:12.247710 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 18:06:12.265560 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 18:06:12.268031 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 18:06:12.275571 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 18:06:12.389446 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 18:06:12.389767 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 18:06:12.391109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 18:06:12.400614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 18:06:12.403575 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 18:06:12.404661 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 18:06:12.406952 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 30 18:06:12.409005 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 18:06:12.409043 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 18:06:12.422192 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805)
Jan 30 18:06:12.422236 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 18:06:12.422255 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 18:06:12.422271 kernel: BTRFS info (device vda6): using free space tree
Jan 30 18:06:12.419769 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 18:06:12.426441 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 18:06:12.429583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 18:06:12.433066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 18:06:12.509700 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 18:06:12.519157 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jan 30 18:06:12.525058 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 18:06:12.533821 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 18:06:12.668936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 18:06:12.675541 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 18:06:12.678146 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 18:06:12.692471 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 18:06:12.713108 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 18:06:12.731738 ignition[922]: INFO : Ignition 2.19.0
Jan 30 18:06:12.731738 ignition[922]: INFO : Stage: mount
Jan 30 18:06:12.734073 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 18:06:12.734073 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 18:06:12.734073 ignition[922]: INFO : mount: mount passed
Jan 30 18:06:12.734073 ignition[922]: INFO : Ignition finished successfully
Jan 30 18:06:12.734785 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 18:06:12.765594 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 18:06:13.937073 systemd-networkd[774]: eth0: Gained IPv6LL
Jan 30 18:06:15.193673 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9105:24:19ff:fee6:4416/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9105:24:19ff:fee6:4416/64 assigned by NDisc.
Jan 30 18:06:15.193694 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 30 18:06:19.607500 coreos-metadata[807]: Jan 30 18:06:19.607 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 18:06:19.627963 coreos-metadata[807]: Jan 30 18:06:19.627 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 18:06:19.648691 coreos-metadata[807]: Jan 30 18:06:19.648 INFO Fetch successful
Jan 30 18:06:19.649714 coreos-metadata[807]: Jan 30 18:06:19.648 INFO wrote hostname srv-xoz4v.gb1.brightbox.com to /sysroot/etc/hostname
Jan 30 18:06:19.651611 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 30 18:06:19.651937 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 30 18:06:19.660657 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 18:06:19.691861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 18:06:19.703460 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Jan 30 18:06:19.708485 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 18:06:19.708545 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 18:06:19.709548 kernel: BTRFS info (device vda6): using free space tree
Jan 30 18:06:19.714479 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 18:06:19.718462 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 18:06:19.754188 ignition[955]: INFO : Ignition 2.19.0
Jan 30 18:06:19.754188 ignition[955]: INFO : Stage: files
Jan 30 18:06:19.755979 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 18:06:19.755979 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 18:06:19.755979 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 18:06:19.758687 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 18:06:19.758687 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 18:06:19.760681 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 18:06:19.760681 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 18:06:19.762780 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 18:06:19.761044 unknown[955]: wrote ssh authorized keys file for user: core
Jan 30 18:06:19.766065 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 18:06:19.767396 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 18:06:19.767396 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 18:06:19.767396 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 18:06:19.895072 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 18:06:22.139366 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 18:06:22.141560 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 18:06:22.141560 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 18:06:22.141560 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 18:06:22.141560 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 18:06:22.141560 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 18:06:22.141560 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 18:06:22.141560 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 18:06:22.155288 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 18:06:22.155288 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 18:06:22.155288 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 18:06:22.155288 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 18:06:22.155288 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 18:06:22.155288 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 18:06:22.155288 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 18:06:22.775279 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 18:06:24.545113 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 18:06:24.548047 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 18:06:24.548047 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 18:06:24.550719 ignition[955]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 18:06:24.563439 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 18:06:24.563439 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 18:06:24.563439 ignition[955]: INFO : files: files passed
Jan 30 18:06:24.563439 ignition[955]: INFO : Ignition finished successfully
Jan 30 18:06:24.555263 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 18:06:24.567659 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 18:06:24.597288 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 18:06:24.601923 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 18:06:24.602090 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 18:06:24.628113 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 18:06:24.629662 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 18:06:24.632396 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 18:06:24.634530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 18:06:24.636056 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 18:06:24.641729 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 18:06:24.687458 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 18:06:24.687667 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 18:06:24.689534 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 18:06:24.690848 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 18:06:24.692481 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 18:06:24.698655 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 18:06:24.728663 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 18:06:24.736712 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 18:06:24.752327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 18:06:24.754279 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 18:06:24.755242 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 18:06:24.756815 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 18:06:24.756992 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 18:06:24.758933 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 18:06:24.759887 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 18:06:24.761324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 18:06:24.762699 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 18:06:24.764128 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 18:06:24.765673 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 18:06:24.767217 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 18:06:24.768858 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 18:06:24.770292 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 18:06:24.771950 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 18:06:24.773310 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 18:06:24.773538 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 18:06:24.775285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 18:06:24.776351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 18:06:24.777746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 18:06:24.778088 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 18:06:24.779202 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 18:06:24.779366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 18:06:24.781574 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 18:06:24.781756 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 18:06:24.783346 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 18:06:24.783558 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 18:06:24.793894 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 18:06:24.797838 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 18:06:24.798738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 18:06:24.799030 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 18:06:24.809151 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 18:06:24.809347 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 18:06:24.820644 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 18:06:24.820804 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 18:06:24.843284 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 18:06:24.850009 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 18:06:24.851080 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 18:06:24.859974 ignition[1008]: INFO : Ignition 2.19.0
Jan 30 18:06:24.859974 ignition[1008]: INFO : Stage: umount
Jan 30 18:06:24.861861 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 18:06:24.861861 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 18:06:24.863725 ignition[1008]: INFO : umount: umount passed
Jan 30 18:06:24.863725 ignition[1008]: INFO : Ignition finished successfully
Jan 30 18:06:24.863336 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 18:06:24.863589 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 18:06:24.865114 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 18:06:24.865280 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 18:06:24.866884 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 18:06:24.866983 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 18:06:24.868313 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 18:06:24.868397 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 18:06:24.869683 systemd[1]: Stopped target network.target - Network.
Jan 30 18:06:24.871066 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 18:06:24.871187 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 18:06:24.872572 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 18:06:24.873808 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 18:06:24.875754 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 18:06:24.876855 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 18:06:24.878241 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 18:06:24.879913 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 18:06:24.880028 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 18:06:24.881419 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 18:06:24.881505 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 18:06:24.882697 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 18:06:24.882774 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 18:06:24.884198 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 18:06:24.884320 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 18:06:24.885861 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 18:06:24.885933 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 18:06:24.887536 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 18:06:24.889180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 18:06:24.895736 systemd-networkd[774]: eth0: DHCPv6 lease lost
Jan 30 18:06:24.900926 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 18:06:24.901213 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 18:06:24.903497 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 18:06:24.903801 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 18:06:24.910625 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 18:06:24.911351 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 18:06:24.912491 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 18:06:24.914014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 18:06:24.919797 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 18:06:24.920099 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 18:06:24.928827 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 18:06:24.929100 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 18:06:24.943195 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 18:06:24.943287 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 18:06:24.945262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 18:06:24.945325 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 18:06:24.946811 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 18:06:24.946892 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 18:06:24.949110 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 18:06:24.949183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 18:06:24.950492 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 18:06:24.950567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 18:06:24.962735 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 18:06:24.965819 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 18:06:24.965914 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 18:06:24.966654 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 18:06:24.966760 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 18:06:24.969541 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 18:06:24.969611 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 18:06:24.970454 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 18:06:24.970540 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 18:06:24.971355 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 18:06:24.973100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 18:06:24.974370 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 18:06:24.974631 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 18:06:24.975940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 18:06:24.976007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:06:24.978061 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 18:06:24.978245 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 18:06:24.979274 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 18:06:24.979489 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 18:06:24.982099 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 18:06:24.989666 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 18:06:25.002173 systemd[1]: Switching root.
Jan 30 18:06:25.036447 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 30 18:06:25.036632 systemd-journald[201]: Journal stopped
Jan 30 18:06:26.790003 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 18:06:26.790233 kernel: SELinux: policy capability open_perms=1
Jan 30 18:06:26.790268 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 18:06:26.790310 kernel: SELinux: policy capability always_check_network=0
Jan 30 18:06:26.790341 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 18:06:26.790377 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 18:06:26.793539 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 18:06:26.793566 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 18:06:26.793607 kernel: audit: type=1403 audit(1738260385.461:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 18:06:26.793668 systemd[1]: Successfully loaded SELinux policy in 68.323ms.
Jan 30 18:06:26.793726 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.271ms.
Jan 30 18:06:26.793760 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 18:06:26.793808 systemd[1]: Detected virtualization kvm.
Jan 30 18:06:26.793834 systemd[1]: Detected architecture x86-64.
Jan 30 18:06:26.793887 systemd[1]: Detected first boot.
Jan 30 18:06:26.793928 systemd[1]: Hostname set to .
Jan 30 18:06:26.793949 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 18:06:26.793969 zram_generator::config[1068]: No configuration found.
Jan 30 18:06:26.794010 systemd[1]: Populated /etc with preset unit settings.
Jan 30 18:06:26.794031 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 18:06:26.794059 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 18:06:26.794087 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 18:06:26.794121 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 18:06:26.794153 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 18:06:26.794174 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 18:06:26.794194 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 18:06:26.794214 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 18:06:26.794250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 18:06:26.794280 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 18:06:26.794301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 18:06:26.794327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 18:06:26.794365 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 18:06:26.794394 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 18:06:26.799471 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 18:06:26.799517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 18:06:26.799541 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 18:06:26.799570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 18:06:26.799591 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 18:06:26.799612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 18:06:26.799639 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 18:06:26.799683 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 18:06:26.799712 systemd[1]: Reached target swap.target - Swaps.
Jan 30 18:06:26.799733 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 18:06:26.799754 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 18:06:26.799811 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 18:06:26.799855 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 18:06:26.799876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 18:06:26.799900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 18:06:26.799926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 18:06:26.799946 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 18:06:26.799965 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 18:06:26.799984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 18:06:26.800032 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 18:06:26.800062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:26.800084 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 18:06:26.800110 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 18:06:26.800150 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 18:06:26.800169 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 18:06:26.800187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 18:06:26.800211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 18:06:26.800230 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 18:06:26.800261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 18:06:26.800281 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 18:06:26.800322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 18:06:26.800355 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 18:06:26.800387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 18:06:26.800407 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 18:06:26.803542 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 18:06:26.803586 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 18:06:26.803624 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 18:06:26.803646 kernel: fuse: init (API version 7.39)
Jan 30 18:06:26.803666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 18:06:26.803685 kernel: loop: module loaded
Jan 30 18:06:26.803713 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 18:06:26.803734 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 18:06:26.803755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 18:06:26.803786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:26.803807 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 18:06:26.803839 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 18:06:26.803861 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 18:06:26.803893 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 18:06:26.803914 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 18:06:26.803934 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 18:06:26.803953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 18:06:26.803979 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 18:06:26.803999 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 18:06:26.804058 systemd-journald[1172]: Collecting audit messages is disabled.
Jan 30 18:06:26.804135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 18:06:26.804159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 18:06:26.804201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 18:06:26.804222 kernel: ACPI: bus type drm_connector registered
Jan 30 18:06:26.804249 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 18:06:26.804270 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 18:06:26.804290 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 18:06:26.804310 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 18:06:26.804340 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 18:06:26.804361 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 18:06:26.804381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 18:06:26.804436 systemd-journald[1172]: Journal started
Jan 30 18:06:26.810128 systemd-journald[1172]: Runtime Journal (/run/log/journal/b2c2b17f80d14413b80fa1bc7449983e) is 4.7M, max 38.0M, 33.2M free.
Jan 30 18:06:26.810220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 18:06:26.810270 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 18:06:26.816474 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 18:06:26.817631 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 18:06:26.818804 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 18:06:26.833711 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 18:06:26.840531 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 18:06:26.851043 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 18:06:26.851932 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 18:06:26.860617 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 18:06:26.873684 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 18:06:26.874686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 18:06:26.880559 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 18:06:26.881398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 18:06:26.890614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 18:06:26.903557 systemd-journald[1172]: Time spent on flushing to /var/log/journal/b2c2b17f80d14413b80fa1bc7449983e is 58.357ms for 1125 entries.
Jan 30 18:06:26.903557 systemd-journald[1172]: System Journal (/var/log/journal/b2c2b17f80d14413b80fa1bc7449983e) is 8.0M, max 584.8M, 576.8M free.
Jan 30 18:06:26.989666 systemd-journald[1172]: Received client request to flush runtime journal.
Jan 30 18:06:26.912735 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 18:06:26.920966 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 18:06:26.923884 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 18:06:26.945219 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 18:06:26.946209 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 18:06:26.995122 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 18:06:27.009040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 18:06:27.020034 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 30 18:06:27.020483 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 30 18:06:27.029077 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 18:06:27.042662 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 18:06:27.051988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 18:06:27.057631 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 18:06:27.090285 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 18:06:27.098700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 18:06:27.109580 udevadm[1242]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 18:06:27.134031 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 30 18:06:27.134059 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 30 18:06:27.143067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 18:06:27.813743 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 18:06:27.827715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 18:06:27.860382 systemd-udevd[1252]: Using default interface naming scheme 'v255'.
Jan 30 18:06:27.890184 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 18:06:27.902642 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 18:06:27.930672 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 18:06:28.009274 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 30 18:06:28.016526 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 18:06:28.091633 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1264)
Jan 30 18:06:28.137131 systemd-networkd[1256]: lo: Link UP
Jan 30 18:06:28.142444 systemd-networkd[1256]: lo: Gained carrier
Jan 30 18:06:28.148971 systemd-networkd[1256]: Enumeration completed
Jan 30 18:06:28.151214 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 18:06:28.152575 systemd-networkd[1256]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 18:06:28.156112 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 18:06:28.160045 systemd-networkd[1256]: eth0: Link UP
Jan 30 18:06:28.160244 systemd-networkd[1256]: eth0: Gained carrier
Jan 30 18:06:28.160380 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 18:06:28.184658 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 18:06:28.186505 systemd-networkd[1256]: eth0: DHCPv4 address 10.230.68.22/30, gateway 10.230.68.21 acquired from 10.230.68.21
Jan 30 18:06:28.236435 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 18:06:28.239530 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 18:06:28.244432 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 18:06:28.247456 kernel: ACPI: button: Power Button [PWRF]
Jan 30 18:06:28.300485 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 30 18:06:28.308458 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 30 18:06:28.316711 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 18:06:28.317016 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 18:06:28.337343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 18:06:28.370713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 18:06:28.573184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 18:06:28.585724 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 18:06:28.595705 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 18:06:28.617000 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 18:06:28.647914 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 18:06:28.649722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 18:06:28.666838 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 18:06:28.672973 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 18:06:28.708770 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 18:06:28.710268 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 18:06:28.711132 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 18:06:28.711189 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 18:06:28.711962 systemd[1]: Reached target machines.target - Containers.
Jan 30 18:06:28.715330 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 18:06:28.727615 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 18:06:28.731626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 18:06:28.732559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 18:06:28.737617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 18:06:28.740578 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 18:06:28.747618 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 18:06:28.753697 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 18:06:28.777318 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 18:06:28.786727 kernel: loop0: detected capacity change from 0 to 140768
Jan 30 18:06:28.793965 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 18:06:28.797471 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 18:06:28.831623 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 18:06:28.863473 kernel: loop1: detected capacity change from 0 to 142488
Jan 30 18:06:28.909507 kernel: loop2: detected capacity change from 0 to 210664
Jan 30 18:06:28.973083 kernel: loop3: detected capacity change from 0 to 8
Jan 30 18:06:29.012464 kernel: loop4: detected capacity change from 0 to 140768
Jan 30 18:06:29.040661 kernel: loop5: detected capacity change from 0 to 142488
Jan 30 18:06:29.061576 kernel: loop6: detected capacity change from 0 to 210664
Jan 30 18:06:29.091470 kernel: loop7: detected capacity change from 0 to 8
Jan 30 18:06:29.098462 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 30 18:06:29.099477 (sd-merge)[1318]: Merged extensions into '/usr'.
Jan 30 18:06:29.107078 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 18:06:29.107125 systemd[1]: Reloading...
Jan 30 18:06:29.232705 systemd-networkd[1256]: eth0: Gained IPv6LL
Jan 30 18:06:29.245340 zram_generator::config[1352]: No configuration found.
Jan 30 18:06:29.465472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 18:06:29.512530 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 18:06:29.552018 systemd[1]: Reloading finished in 444 ms.
Jan 30 18:06:29.578116 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 18:06:29.585226 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 18:06:29.587208 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 18:06:29.601653 systemd[1]: Starting ensure-sysext.service...
Jan 30 18:06:29.606592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 18:06:29.616165 systemd[1]: Reloading requested from client PID 1411 ('systemctl') (unit ensure-sysext.service)...
Jan 30 18:06:29.616235 systemd[1]: Reloading...
Jan 30 18:06:29.654050 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 18:06:29.655395 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 18:06:29.657151 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 18:06:29.658967 systemd-tmpfiles[1412]: ACLs are not supported, ignoring.
Jan 30 18:06:29.659194 systemd-tmpfiles[1412]: ACLs are not supported, ignoring.
Jan 30 18:06:29.664030 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 18:06:29.665572 systemd-tmpfiles[1412]: Skipping /boot
Jan 30 18:06:29.693421 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 18:06:29.693629 systemd-tmpfiles[1412]: Skipping /boot
Jan 30 18:06:29.744084 systemd-networkd[1256]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9105:24:19ff:fee6:4416/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9105:24:19ff:fee6:4416/64 assigned by NDisc.
Jan 30 18:06:29.744099 systemd-networkd[1256]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 30 18:06:29.750426 zram_generator::config[1440]: No configuration found.
Jan 30 18:06:29.918485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 18:06:30.001806 systemd[1]: Reloading finished in 384 ms.
Jan 30 18:06:30.032119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 18:06:30.045706 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 18:06:30.051634 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 18:06:30.056609 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 18:06:30.067670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 18:06:30.082723 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 18:06:30.099253 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:30.099581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 18:06:30.105980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 18:06:30.117205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 18:06:30.133390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 18:06:30.139598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 18:06:30.139774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:30.144348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 18:06:30.147590 augenrules[1529]: No rules
Jan 30 18:06:30.144642 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 18:06:30.154719 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 18:06:30.157251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 18:06:30.157617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 18:06:30.168812 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 18:06:30.173677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 18:06:30.179668 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 18:06:30.185150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:30.187039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 18:06:30.206172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 18:06:30.214259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 18:06:30.215160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 18:06:30.222711 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 18:06:30.223427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:30.238264 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 18:06:30.242172 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 18:06:30.244722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 18:06:30.244971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 18:06:30.250007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 18:06:30.250268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 18:06:30.272754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:30.273153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 18:06:30.280590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 18:06:30.291222 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 18:06:30.294725 systemd-resolved[1511]: Positive Trust Anchors:
Jan 30 18:06:30.294744 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 18:06:30.294786 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 18:06:30.300607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 18:06:30.304945 systemd-resolved[1511]: Using system hostname 'srv-xoz4v.gb1.brightbox.com'.
Jan 30 18:06:30.314648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 18:06:30.316024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 18:06:30.316103 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 18:06:30.316136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 18:06:30.318151 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 18:06:30.320444 systemd[1]: Finished ensure-sysext.service.
Jan 30 18:06:30.321664 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 18:06:30.323036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 18:06:30.323266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 18:06:30.324580 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 18:06:30.324814 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 18:06:30.326061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 18:06:30.326283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 18:06:30.327659 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 18:06:30.330700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 18:06:30.337282 systemd[1]: Reached target network.target - Network.
Jan 30 18:06:30.338140 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 18:06:30.338985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 18:06:30.339907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 18:06:30.340119 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 18:06:30.349643 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 18:06:30.419542 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 18:06:30.421057 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 18:06:30.422182 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 18:06:30.423163 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 18:06:30.429595 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 18:06:30.430505 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 18:06:30.430659 systemd[1]: Reached target paths.target - Path Units.
Jan 30 18:06:30.431509 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 18:06:30.432538 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 18:06:30.433526 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 18:06:30.434346 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 18:06:30.436845 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 18:06:30.439922 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 18:06:30.443376 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 18:06:30.445602 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 18:06:30.446402 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 18:06:30.447144 systemd[1]: Reached target basic.target - Basic System.
Jan 30 18:06:30.448139 systemd[1]: System is tainted: cgroupsv1
Jan 30 18:06:30.448211 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 18:06:30.448262 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 18:06:30.451556 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 18:06:30.461614 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 18:06:30.468593 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 18:06:30.477582 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 18:06:30.483854 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 18:06:30.484895 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 18:06:30.496121 jq[1583]: false
Jan 30 18:06:30.496940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 18:06:30.504606 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 18:06:30.511559 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 18:06:30.527050 dbus-daemon[1580]: [system] SELinux support is enabled
Jan 30 18:06:30.527515 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found loop4
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found loop5
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found loop6
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found loop7
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda1
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda2
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda3
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found usr
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda4
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda6
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda7
Jan 30 18:06:30.539743 extend-filesystems[1584]: Found vda9
Jan 30 18:06:30.539743 extend-filesystems[1584]: Checking size of /dev/vda9
Jan 30 18:06:30.584207 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 30 18:06:30.536707 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 18:06:30.584817 extend-filesystems[1584]: Resized partition /dev/vda9
Jan 30 18:06:30.551288 dbus-daemon[1580]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1256 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 30 18:06:30.555618 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 18:06:30.587179 extend-filesystems[1601]: resize2fs 1.47.1 (20-May-2024)
Jan 30 18:06:30.575570 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 18:06:30.580245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 18:06:30.598534 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 18:06:30.613512 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 18:06:30.617339 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 18:06:30.636897 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 18:06:30.637273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 18:06:30.642608 jq[1613]: true
Jan 30 18:06:30.651741 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1262)
Jan 30 18:06:30.654746 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 18:06:30.655102 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 18:06:30.667719 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 18:06:30.678142 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 18:06:30.678506 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 18:06:30.735190 dbus-daemon[1580]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 30 18:06:30.733070 (ntainerd)[1633]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 18:06:30.743788 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 18:06:30.743835 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 18:06:30.757592 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 30 18:06:30.758431 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 18:06:30.758471 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 18:06:30.785387 jq[1625]: true
Jan 30 18:06:31.779630 systemd-timesyncd[1573]: Contacted time server 132.226.210.133:123 (0.flatcar.pool.ntp.org).
Jan 30 18:06:31.779733 systemd-timesyncd[1573]: Initial clock synchronization to Thu 2025-01-30 18:06:31.779389 UTC.
Jan 30 18:06:31.780491 systemd-resolved[1511]: Clock change detected. Flushing caches.
Jan 30 18:06:31.782143 update_engine[1611]: I20250130 18:06:31.780668 1611 main.cc:92] Flatcar Update Engine starting
Jan 30 18:06:31.789932 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 18:06:31.792386 update_engine[1611]: I20250130 18:06:31.792085 1611 update_check_scheduler.cc:74] Next update check in 7m45s
Jan 30 18:06:31.792431 tar[1624]: linux-amd64/helm
Jan 30 18:06:31.793020 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 18:06:31.801075 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 18:06:32.040893 bash[1656]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 18:06:32.040809 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 18:06:32.065836 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 30 18:06:32.088400 systemd[1]: Starting sshkeys.service...
Jan 30 18:06:32.106626 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 18:06:32.106626 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 30 18:06:32.106626 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 30 18:06:32.137408 extend-filesystems[1584]: Resized filesystem in /dev/vda9
Jan 30 18:06:32.138630 sshd_keygen[1619]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 18:06:32.108028 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 18:06:32.108375 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 18:06:32.115656 systemd-logind[1602]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 30 18:06:32.115689 systemd-logind[1602]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 18:06:32.120249 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 18:06:32.121814 systemd-logind[1602]: New seat seat0.
Jan 30 18:06:32.135363 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 18:06:32.138245 systemd[1]: Started systemd-logind.service - User Login Management.
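The resize messages in this stretch of the log report the root filesystem growing from 1617920 to 15121403 blocks of 4 KiB each. As a quick sanity check on those figures (a sketch added here, not part of the captured log), the block counts convert to sizes as blocks × block_size:

```python
def blocks_to_gib(blocks: int, block_size: int = 4096) -> float:
    """Convert an ext4 block count (as printed by resize2fs) to GiB."""
    return blocks * block_size / 2**30

# Figures taken from the resize2fs/kernel messages above.
old = blocks_to_gib(1617920)   # initial root filesystem, roughly 6.2 GiB
new = blocks_to_gib(15121403)  # after on-line resize, roughly 57.7 GiB
```

This matches the usual Flatcar first-boot behavior: the stock image ships a small root partition that extend-filesystems.service grows on-line to fill the disk the instance was given.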
Jan 30 18:06:32.186470 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 18:06:32.255397 dbus-daemon[1580]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 30 18:06:32.255816 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 30 18:06:32.259072 dbus-daemon[1580]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1638 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 30 18:06:32.273681 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 30 18:06:32.275265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 18:06:32.295702 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 18:06:32.349308 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 18:06:32.349679 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 18:06:32.365392 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 18:06:32.372913 polkitd[1687]: Started polkitd version 121
Jan 30 18:06:32.405503 polkitd[1687]: Loading rules from directory /etc/polkit-1/rules.d
Jan 30 18:06:32.407044 polkitd[1687]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 30 18:06:32.424667 polkitd[1687]: Finished loading, compiling and executing 2 rules
Jan 30 18:06:32.428507 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 18:06:32.440440 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 18:06:32.444977 dbus-daemon[1580]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 30 18:06:32.446132 polkitd[1687]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 30 18:06:32.453357 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 18:06:32.458376 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 18:06:32.465955 systemd[1]: Started polkit.service - Authorization Manager.
Jan 30 18:06:32.477476 containerd[1633]: time="2025-01-30T18:06:32.477327601Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 18:06:32.477629 systemd-hostnamed[1638]: Hostname set to (static)
Jan 30 18:06:32.525303 containerd[1633]: time="2025-01-30T18:06:32.524210249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 18:06:32.532136 containerd[1633]: time="2025-01-30T18:06:32.532084887Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 18:06:32.532286 containerd[1633]: time="2025-01-30T18:06:32.532261763Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 18:06:32.532404 containerd[1633]: time="2025-01-30T18:06:32.532380041Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 18:06:32.532850 containerd[1633]: time="2025-01-30T18:06:32.532823733Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 18:06:32.533937 containerd[1633]: time="2025-01-30T18:06:32.533908666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 18:06:32.534462 containerd[1633]: time="2025-01-30T18:06:32.534172759Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 18:06:32.534462 containerd[1633]: time="2025-01-30T18:06:32.534220186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 18:06:32.534775 containerd[1633]: time="2025-01-30T18:06:32.534745060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 18:06:32.534906 containerd[1633]: time="2025-01-30T18:06:32.534881305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 18:06:32.535020 containerd[1633]: time="2025-01-30T18:06:32.534995089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 18:06:32.535126 containerd[1633]: time="2025-01-30T18:06:32.535104409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 18:06:32.535467 containerd[1633]: time="2025-01-30T18:06:32.535433184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 18:06:32.537578 containerd[1633]: time="2025-01-30T18:06:32.537550226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 18:06:32.537963 containerd[1633]: time="2025-01-30T18:06:32.537925650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 18:06:32.538070 containerd[1633]: time="2025-01-30T18:06:32.538045597Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 18:06:32.538365 containerd[1633]: time="2025-01-30T18:06:32.538340371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 18:06:32.538585 containerd[1633]: time="2025-01-30T18:06:32.538560186Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 18:06:32.547357 containerd[1633]: time="2025-01-30T18:06:32.547270953Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 18:06:32.547901 containerd[1633]: time="2025-01-30T18:06:32.547498007Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 18:06:32.548051 containerd[1633]: time="2025-01-30T18:06:32.548018111Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 18:06:32.548164 containerd[1633]: time="2025-01-30T18:06:32.548140467Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 18:06:32.548279 containerd[1633]: time="2025-01-30T18:06:32.548256205Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 18:06:32.548939 containerd[1633]: time="2025-01-30T18:06:32.548502662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 18:06:32.550478 containerd[1633]: time="2025-01-30T18:06:32.550440646Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 18:06:32.550715 containerd[1633]: time="2025-01-30T18:06:32.550689328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.550813250Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.550854350Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.550907150Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.550931668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.550952904Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.550975222Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.550996712Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.551017859Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.551038693Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.551058133Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.551094746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.551119101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.551138964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.551885 containerd[1633]: time="2025-01-30T18:06:32.551161148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551180312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551201624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551220739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551247022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551268082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551290095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551323186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551353769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551374493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551396741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551425375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551445489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551463424Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 18:06:32.552384 containerd[1633]: time="2025-01-30T18:06:32.551531009Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 18:06:32.552846 containerd[1633]: time="2025-01-30T18:06:32.551561351Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 18:06:32.552846 containerd[1633]: time="2025-01-30T18:06:32.551579311Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 18:06:32.552846 containerd[1633]: time="2025-01-30T18:06:32.551598489Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 18:06:32.552846 containerd[1633]: time="2025-01-30T18:06:32.551614004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.552846 containerd[1633]: time="2025-01-30T18:06:32.551633511Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 18:06:32.552846 containerd[1633]: time="2025-01-30T18:06:32.551649574Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 18:06:32.552846 containerd[1633]: time="2025-01-30T18:06:32.551665409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 18:06:32.553905 containerd[1633]: time="2025-01-30T18:06:32.553349306Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 18:06:32.553905 containerd[1633]: time="2025-01-30T18:06:32.553436751Z" level=info msg="Connect containerd service"
Jan 30 18:06:32.553905 containerd[1633]: time="2025-01-30T18:06:32.553493016Z" level=info msg="using legacy CRI server"
Jan 30 18:06:32.553905 containerd[1633]: time="2025-01-30T18:06:32.553508739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 18:06:32.553905 containerd[1633]: time="2025-01-30T18:06:32.553656707Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 18:06:32.556928 containerd[1633]: time="2025-01-30T18:06:32.556359297Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 18:06:32.556989 containerd[1633]: time="2025-01-30T18:06:32.556913634Z" level=info msg="Start subscribing containerd event"
Jan 30 18:06:32.557058 containerd[1633]: time="2025-01-30T18:06:32.556982576Z" level=info msg="Start recovering state"
Jan 30 18:06:32.557238 containerd[1633]: time="2025-01-30T18:06:32.557159246Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 18:06:32.557291 containerd[1633]: time="2025-01-30T18:06:32.557249967Z" level=info msg="Start event monitor"
Jan 30 18:06:32.557291 containerd[1633]: time="2025-01-30T18:06:32.557282605Z" level=info msg="Start snapshots syncer"
Jan 30 18:06:32.557352 containerd[1633]: time="2025-01-30T18:06:32.557299435Z" level=info msg="Start cni network conf syncer for default"
Jan 30 18:06:32.557352 containerd[1633]: time="2025-01-30T18:06:32.557314199Z" level=info msg="Start streaming server"
Jan 30 18:06:32.558486 containerd[1633]: time="2025-01-30T18:06:32.557496426Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 18:06:32.557722 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 18:06:32.564895 containerd[1633]: time="2025-01-30T18:06:32.564344387Z" level=info msg="containerd successfully booted in 0.094817s"
Jan 30 18:06:33.067158 tar[1624]: linux-amd64/LICENSE
Jan 30 18:06:33.071246 tar[1624]: linux-amd64/README.md
Jan 30 18:06:33.089502 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
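Two details stand out in the CRI config dump above: the runc runtime is running with `SystemdCgroup:false` (consistent with the "System is tainted: cgroupsv1" message earlier in the boot), and CNI fails to initialize because /etc/cni/net.d is empty, which is expected before any network plugin has been installed. For reference only (a sketch of the stock containerd configuration format, not a file taken from this host), the cgroup driver seen in the dump corresponds to this section of /etc/containerd/config.toml:

```toml
# Sketch of the relevant /etc/containerd/config.toml section (containerd 1.7).
# SystemdCgroup = false matches the "Options:map[SystemdCgroup:false]" value
# in the dumped CRI config; on systemd hosts with cgroup v2 it is usually true.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
```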
Jan 30 18:06:33.472095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 18:06:33.477364 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 18:06:34.323637 kubelet[1729]: E0130 18:06:34.323476 1729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 18:06:34.327117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 18:06:34.327461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 18:06:34.870263 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 18:06:34.876417 systemd[1]: Started sshd@0-10.230.68.22:22-139.178.89.65:45144.service - OpenSSH per-connection server daemon (139.178.89.65:45144).
Jan 30 18:06:35.779358 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 45144 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:06:35.783330 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:06:35.799453 systemd-logind[1602]: New session 1 of user core.
Jan 30 18:06:35.802589 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 18:06:35.809348 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 18:06:35.833342 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 18:06:35.851506 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 18:06:35.866558 (systemd)[1746]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 18:06:35.998580 systemd[1746]: Queued start job for default target default.target.
Jan 30 18:06:35.999161 systemd[1746]: Created slice app.slice - User Application Slice.
Jan 30 18:06:35.999190 systemd[1746]: Reached target paths.target - Paths.
Jan 30 18:06:35.999211 systemd[1746]: Reached target timers.target - Timers.
Jan 30 18:06:36.004982 systemd[1746]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 18:06:36.023173 systemd[1746]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 18:06:36.023403 systemd[1746]: Reached target sockets.target - Sockets.
Jan 30 18:06:36.023433 systemd[1746]: Reached target basic.target - Basic System.
Jan 30 18:06:36.023503 systemd[1746]: Reached target default.target - Main User Target.
Jan 30 18:06:36.023570 systemd[1746]: Startup finished in 147ms.
Jan 30 18:06:36.024224 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 18:06:36.037578 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 18:06:36.666474 systemd[1]: Started sshd@1-10.230.68.22:22-139.178.89.65:45160.service - OpenSSH per-connection server daemon (139.178.89.65:45160).
Jan 30 18:06:37.530697 login[1706]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 18:06:37.532667 login[1708]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 18:06:37.542896 systemd-logind[1602]: New session 3 of user core.
Jan 30 18:06:37.550608 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 45160 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:06:37.552479 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:06:37.553459 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 18:06:37.560284 systemd-logind[1602]: New session 2 of user core.
Jan 30 18:06:37.567386 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 18:06:37.573079 systemd-logind[1602]: New session 4 of user core.
Jan 30 18:06:37.575989 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 18:06:38.167604 sshd[1758]: pam_unix(sshd:session): session closed for user core
Jan 30 18:06:38.171893 systemd[1]: sshd@1-10.230.68.22:22-139.178.89.65:45160.service: Deactivated successfully.
Jan 30 18:06:38.175374 systemd-logind[1602]: Session 4 logged out. Waiting for processes to exit.
Jan 30 18:06:38.176761 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 18:06:38.179392 systemd-logind[1602]: Removed session 4.
Jan 30 18:06:38.318323 systemd[1]: Started sshd@2-10.230.68.22:22-139.178.89.65:45168.service - OpenSSH per-connection server daemon (139.178.89.65:45168).
Jan 30 18:06:38.594436 coreos-metadata[1578]: Jan 30 18:06:38.594 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 18:06:38.618975 coreos-metadata[1578]: Jan 30 18:06:38.618 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 30 18:06:38.625275 coreos-metadata[1578]: Jan 30 18:06:38.625 INFO Fetch failed with 404: resource not found
Jan 30 18:06:38.625275 coreos-metadata[1578]: Jan 30 18:06:38.625 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 18:06:38.625958 coreos-metadata[1578]: Jan 30 18:06:38.625 INFO Fetch successful
Jan 30 18:06:38.626212 coreos-metadata[1578]: Jan 30 18:06:38.626 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 30 18:06:38.641910 coreos-metadata[1578]: Jan 30 18:06:38.641 INFO Fetch successful
Jan 30 18:06:38.641998 coreos-metadata[1578]: Jan 30 18:06:38.641 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 30 18:06:38.657221 coreos-metadata[1578]: Jan 30 18:06:38.657 INFO Fetch successful
Jan 30 18:06:38.657398 coreos-metadata[1578]: Jan 30 18:06:38.657 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 30 18:06:38.670882 coreos-metadata[1578]: Jan 30 18:06:38.670 INFO Fetch successful
Jan 30 18:06:38.671100 coreos-metadata[1578]: Jan 30 18:06:38.671 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 30 18:06:38.688466 coreos-metadata[1578]: Jan 30 18:06:38.688 INFO Fetch successful
Jan 30 18:06:38.718278 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 18:06:38.720326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 18:06:39.236285 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 45168 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:06:39.238102 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:06:39.244411 systemd-logind[1602]: New session 5 of user core.
Jan 30 18:06:39.254398 systemd[1]: Started session-5.scope - Session 5 of User core.
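The metadata agent above first tries the OpenStack-specific endpoint (which 404s on this cloud) and then falls back to the EC2-compatible API, walking one key at a time. The URLs it fetches follow a fixed pattern, reproduced here as a small sketch (the helper name is illustrative, not part of the agent):

```python
# Sketch of the EC2-compatible metadata URLs walked by the agent above.
METADATA_BASE = "http://169.254.169.254"

def metadata_url(key: str) -> str:
    """Build an EC2-style metadata URL for a single key."""
    return f"{METADATA_BASE}/latest/meta-data/{key}"

# Keys fetched in the log, in order.
keys = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]
urls = [metadata_url(k) for k in keys]
```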
Jan 30 18:06:39.388245 coreos-metadata[1668]: Jan 30 18:06:39.388 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 18:06:39.409794 coreos-metadata[1668]: Jan 30 18:06:39.409 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 30 18:06:39.433459 coreos-metadata[1668]: Jan 30 18:06:39.433 INFO Fetch successful
Jan 30 18:06:39.433783 coreos-metadata[1668]: Jan 30 18:06:39.433 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 30 18:06:39.460192 coreos-metadata[1668]: Jan 30 18:06:39.460 INFO Fetch successful
Jan 30 18:06:39.463218 unknown[1668]: wrote ssh authorized keys file for user: core
Jan 30 18:06:39.484893 update-ssh-keys[1812]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 18:06:39.488132 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 18:06:39.494663 systemd[1]: Finished sshkeys.service.
Jan 30 18:06:39.498686 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 18:06:39.499301 systemd[1]: Startup finished in 18.547s (kernel) + 13.121s (userspace) = 31.669s.
Jan 30 18:06:39.857225 sshd[1794]: pam_unix(sshd:session): session closed for user core
Jan 30 18:06:39.860969 systemd[1]: sshd@2-10.230.68.22:22-139.178.89.65:45168.service: Deactivated successfully.
Jan 30 18:06:39.865223 systemd-logind[1602]: Session 5 logged out. Waiting for processes to exit.
Jan 30 18:06:39.866623 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 18:06:39.867724 systemd-logind[1602]: Removed session 5.
Jan 30 18:06:44.578038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 18:06:44.586137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 18:06:44.742079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 18:06:44.746867 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:06:44.870013 kubelet[1834]: E0130 18:06:44.869783 1834 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:06:44.874375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:06:44.874683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 18:06:48.607247 systemd[1]: Started sshd@3-10.230.68.22:22-113.200.60.74:39046.service - OpenSSH per-connection server daemon (113.200.60.74:39046). Jan 30 18:06:50.011305 systemd[1]: Started sshd@4-10.230.68.22:22-139.178.89.65:54692.service - OpenSSH per-connection server daemon (139.178.89.65:54692). Jan 30 18:06:50.893577 sshd[1844]: Accepted publickey for core from 139.178.89.65 port 54692 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:06:50.895703 sshd[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:06:50.902064 systemd-logind[1602]: New session 6 of user core. Jan 30 18:06:50.910298 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 18:06:51.514203 sshd[1844]: pam_unix(sshd:session): session closed for user core Jan 30 18:06:51.517815 systemd[1]: sshd@4-10.230.68.22:22-139.178.89.65:54692.service: Deactivated successfully. Jan 30 18:06:51.522187 systemd-logind[1602]: Session 6 logged out. Waiting for processes to exit. Jan 30 18:06:51.523409 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 18:06:51.524717 systemd-logind[1602]: Removed session 6. 
Jan 30 18:06:51.668276 systemd[1]: Started sshd@5-10.230.68.22:22-139.178.89.65:60148.service - OpenSSH per-connection server daemon (139.178.89.65:60148). Jan 30 18:06:52.555662 sshd[1852]: Accepted publickey for core from 139.178.89.65 port 60148 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:06:52.558173 sshd[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:06:52.564692 systemd-logind[1602]: New session 7 of user core. Jan 30 18:06:52.580446 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 18:06:53.168301 sshd[1852]: pam_unix(sshd:session): session closed for user core Jan 30 18:06:53.174731 systemd[1]: sshd@5-10.230.68.22:22-139.178.89.65:60148.service: Deactivated successfully. Jan 30 18:06:53.176139 systemd-logind[1602]: Session 7 logged out. Waiting for processes to exit. Jan 30 18:06:53.179572 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 18:06:53.180939 systemd-logind[1602]: Removed session 7. Jan 30 18:06:53.320287 systemd[1]: Started sshd@6-10.230.68.22:22-139.178.89.65:60152.service - OpenSSH per-connection server daemon (139.178.89.65:60152). Jan 30 18:06:54.223034 sshd[1860]: Accepted publickey for core from 139.178.89.65 port 60152 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:06:54.227015 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:06:54.236835 systemd-logind[1602]: New session 8 of user core. Jan 30 18:06:54.250944 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 18:06:54.849352 sshd[1860]: pam_unix(sshd:session): session closed for user core Jan 30 18:06:54.855764 systemd[1]: sshd@6-10.230.68.22:22-139.178.89.65:60152.service: Deactivated successfully. Jan 30 18:06:54.859061 systemd-logind[1602]: Session 8 logged out. Waiting for processes to exit. Jan 30 18:06:54.860607 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 30 18:06:54.861960 systemd-logind[1602]: Removed session 8. Jan 30 18:06:54.993433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 18:06:55.004211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:06:55.008220 systemd[1]: Started sshd@7-10.230.68.22:22-139.178.89.65:60158.service - OpenSSH per-connection server daemon (139.178.89.65:60158). Jan 30 18:06:55.454092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:06:55.459333 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:06:55.532570 kubelet[1882]: E0130 18:06:55.532223 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:06:55.535765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:06:55.536662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 18:06:55.907045 sshd[1869]: Accepted publickey for core from 139.178.89.65 port 60158 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:06:55.909290 sshd[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:06:55.917169 systemd-logind[1602]: New session 9 of user core. Jan 30 18:06:55.927611 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 30 18:06:56.400213 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 18:06:56.400742 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:06:56.416571 sudo[1893]: pam_unix(sudo:session): session closed for user root Jan 30 18:06:56.561526 sshd[1869]: pam_unix(sshd:session): session closed for user core Jan 30 18:06:56.567656 systemd[1]: sshd@7-10.230.68.22:22-139.178.89.65:60158.service: Deactivated successfully. Jan 30 18:06:56.571609 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 18:06:56.571612 systemd-logind[1602]: Session 9 logged out. Waiting for processes to exit. Jan 30 18:06:56.574446 systemd-logind[1602]: Removed session 9. Jan 30 18:06:56.714682 systemd[1]: Started sshd@8-10.230.68.22:22-139.178.89.65:60162.service - OpenSSH per-connection server daemon (139.178.89.65:60162). Jan 30 18:06:57.611819 sshd[1898]: Accepted publickey for core from 139.178.89.65 port 60162 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:06:57.615181 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:06:57.623266 systemd-logind[1602]: New session 10 of user core. Jan 30 18:06:57.630283 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 18:06:58.093184 sudo[1903]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 18:06:58.093748 sudo[1903]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:06:58.099637 sudo[1903]: pam_unix(sudo:session): session closed for user root Jan 30 18:06:58.107903 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 18:06:58.108339 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:06:58.128223 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 18:06:58.132297 auditctl[1906]: No rules Jan 30 18:06:58.133187 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 18:06:58.133582 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 18:06:58.144567 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 18:06:58.181732 augenrules[1925]: No rules Jan 30 18:06:58.183404 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 18:06:58.185741 sudo[1902]: pam_unix(sudo:session): session closed for user root Jan 30 18:06:58.332271 sshd[1898]: pam_unix(sshd:session): session closed for user core Jan 30 18:06:58.335770 systemd[1]: sshd@8-10.230.68.22:22-139.178.89.65:60162.service: Deactivated successfully. Jan 30 18:06:58.340569 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 18:06:58.341347 systemd-logind[1602]: Session 10 logged out. Waiting for processes to exit. Jan 30 18:06:58.342882 systemd-logind[1602]: Removed session 10. Jan 30 18:06:58.483430 systemd[1]: Started sshd@9-10.230.68.22:22-139.178.89.65:60178.service - OpenSSH per-connection server daemon (139.178.89.65:60178). 
Jan 30 18:06:59.363519 sshd[1934]: Accepted publickey for core from 139.178.89.65 port 60178 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 18:06:59.365512 sshd[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 18:06:59.372977 systemd-logind[1602]: New session 11 of user core. Jan 30 18:06:59.378294 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 18:06:59.840289 sudo[1938]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 18:06:59.840754 sudo[1938]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 18:07:00.488556 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 18:07:00.488609 (dockerd)[1954]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 18:07:01.154141 dockerd[1954]: time="2025-01-30T18:07:01.154009955Z" level=info msg="Starting up" Jan 30 18:07:01.509187 dockerd[1954]: time="2025-01-30T18:07:01.508462635Z" level=info msg="Loading containers: start." Jan 30 18:07:01.680912 kernel: Initializing XFRM netlink socket Jan 30 18:07:01.799714 systemd-networkd[1256]: docker0: Link UP Jan 30 18:07:01.824215 dockerd[1954]: time="2025-01-30T18:07:01.823920012Z" level=info msg="Loading containers: done." 
Jan 30 18:07:01.870005 dockerd[1954]: time="2025-01-30T18:07:01.868373555Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 18:07:01.870005 dockerd[1954]: time="2025-01-30T18:07:01.868920904Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 18:07:01.870005 dockerd[1954]: time="2025-01-30T18:07:01.869230106Z" level=info msg="Daemon has completed initialization" Jan 30 18:07:01.917186 dockerd[1954]: time="2025-01-30T18:07:01.916964606Z" level=info msg="API listen on /run/docker.sock" Jan 30 18:07:01.917843 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 18:07:02.505016 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 18:07:03.378524 containerd[1633]: time="2025-01-30T18:07:03.378148349Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 18:07:04.130629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893015209.mount: Deactivated successfully. Jan 30 18:07:05.586761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 18:07:05.601268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:07:05.907561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 18:07:05.925712 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:07:06.073621 kubelet[2170]: E0130 18:07:06.073435 2170 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:07:06.076705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:07:06.077232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 18:07:06.512955 containerd[1633]: time="2025-01-30T18:07:06.512884662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:06.514666 containerd[1633]: time="2025-01-30T18:07:06.514617816Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 30 18:07:06.515404 containerd[1633]: time="2025-01-30T18:07:06.515331934Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:06.520292 containerd[1633]: time="2025-01-30T18:07:06.520249415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:06.525071 containerd[1633]: time="2025-01-30T18:07:06.525014142Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.146733442s" Jan 30 18:07:06.525192 containerd[1633]: time="2025-01-30T18:07:06.525097466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 18:07:06.551605 containerd[1633]: time="2025-01-30T18:07:06.551545848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 18:07:09.308012 containerd[1633]: time="2025-01-30T18:07:09.307945701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:09.309492 containerd[1633]: time="2025-01-30T18:07:09.309451651Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 30 18:07:09.310317 containerd[1633]: time="2025-01-30T18:07:09.310012123Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:09.313836 containerd[1633]: time="2025-01-30T18:07:09.313764753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:09.316029 containerd[1633]: time="2025-01-30T18:07:09.315373223Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" 
in 2.763758404s" Jan 30 18:07:09.316029 containerd[1633]: time="2025-01-30T18:07:09.315419416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 18:07:09.350845 containerd[1633]: time="2025-01-30T18:07:09.350782495Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 18:07:11.015021 containerd[1633]: time="2025-01-30T18:07:11.014958516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:11.017172 containerd[1633]: time="2025-01-30T18:07:11.017105646Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 30 18:07:11.018964 containerd[1633]: time="2025-01-30T18:07:11.018918444Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:11.022222 containerd[1633]: time="2025-01-30T18:07:11.022159141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:11.023913 containerd[1633]: time="2025-01-30T18:07:11.023720236Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.672875459s" Jan 30 18:07:11.023913 containerd[1633]: time="2025-01-30T18:07:11.023766178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference 
\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 18:07:11.056343 containerd[1633]: time="2025-01-30T18:07:11.056010716Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 18:07:13.021985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543536612.mount: Deactivated successfully. Jan 30 18:07:13.808545 containerd[1633]: time="2025-01-30T18:07:13.808450618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:13.810329 containerd[1633]: time="2025-01-30T18:07:13.810260142Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 30 18:07:13.811382 containerd[1633]: time="2025-01-30T18:07:13.811319670Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:13.814988 containerd[1633]: time="2025-01-30T18:07:13.814937591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:13.816018 containerd[1633]: time="2025-01-30T18:07:13.815681050Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.759607692s" Jan 30 18:07:13.816018 containerd[1633]: time="2025-01-30T18:07:13.815731435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 18:07:13.848971 
containerd[1633]: time="2025-01-30T18:07:13.848925302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 18:07:14.477911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2401470195.mount: Deactivated successfully. Jan 30 18:07:15.806579 containerd[1633]: time="2025-01-30T18:07:15.806487018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:15.808750 containerd[1633]: time="2025-01-30T18:07:15.808702826Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 18:07:15.810357 containerd[1633]: time="2025-01-30T18:07:15.809671963Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:15.813079 containerd[1633]: time="2025-01-30T18:07:15.813008090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:15.815401 containerd[1633]: time="2025-01-30T18:07:15.814623715Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.965433755s" Jan 30 18:07:15.815401 containerd[1633]: time="2025-01-30T18:07:15.814672072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 18:07:15.845410 containerd[1633]: time="2025-01-30T18:07:15.845349153Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 18:07:16.084133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 18:07:16.092141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:07:16.411126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:07:16.419236 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 18:07:16.552616 kubelet[2272]: E0130 18:07:16.552518 2272 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 18:07:16.555495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 18:07:16.556849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 18:07:16.645297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648571749.mount: Deactivated successfully. 
Jan 30 18:07:16.649360 containerd[1633]: time="2025-01-30T18:07:16.649277673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:16.651086 containerd[1633]: time="2025-01-30T18:07:16.651031585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 30 18:07:16.651902 containerd[1633]: time="2025-01-30T18:07:16.651828310Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:16.654770 containerd[1633]: time="2025-01-30T18:07:16.654731619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:16.656420 containerd[1633]: time="2025-01-30T18:07:16.656254345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 810.852344ms" Jan 30 18:07:16.656420 containerd[1633]: time="2025-01-30T18:07:16.656303409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 18:07:16.683078 containerd[1633]: time="2025-01-30T18:07:16.682890995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 18:07:16.840811 update_engine[1611]: I20250130 18:07:16.840301 1611 update_attempter.cc:509] Updating boot flags... 
Jan 30 18:07:16.911533 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2297) Jan 30 18:07:16.981900 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2300) Jan 30 18:07:17.047519 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2300) Jan 30 18:07:17.371890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416778748.mount: Deactivated successfully. Jan 30 18:07:20.221207 containerd[1633]: time="2025-01-30T18:07:20.221111692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:20.222649 containerd[1633]: time="2025-01-30T18:07:20.222603366Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 30 18:07:20.223453 containerd[1633]: time="2025-01-30T18:07:20.223386915Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:20.227495 containerd[1633]: time="2025-01-30T18:07:20.227426912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:07:20.229398 containerd[1633]: time="2025-01-30T18:07:20.229188070Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.546246147s" Jan 30 18:07:20.229398 containerd[1633]: time="2025-01-30T18:07:20.229248546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" 
returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 18:07:23.473598 systemd[1]: Started sshd@10-10.230.68.22:22-113.200.60.74:42777.service - OpenSSH per-connection server daemon (113.200.60.74:42777). Jan 30 18:07:24.314467 systemd[1]: Started sshd@11-10.230.68.22:22-52.140.61.101:44802.service - OpenSSH per-connection server daemon (52.140.61.101:44802). Jan 30 18:07:24.805589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:07:24.817294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:07:24.850194 systemd[1]: Reloading requested from client PID 2417 ('systemctl') (unit session-11.scope)... Jan 30 18:07:24.850238 systemd[1]: Reloading... Jan 30 18:07:25.022725 zram_generator::config[2469]: No configuration found. Jan 30 18:07:25.202050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 18:07:25.306096 systemd[1]: Reloading finished in 455 ms. Jan 30 18:07:25.368058 sshd[2408]: Invalid user dev from 52.140.61.101 port 44802 Jan 30 18:07:25.374541 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 18:07:25.375069 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 18:07:25.375839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:07:25.383428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 18:07:25.606228 sshd[2408]: Received disconnect from 52.140.61.101 port 44802:11: Bye Bye [preauth] Jan 30 18:07:25.606228 sshd[2408]: Disconnected from invalid user dev 52.140.61.101 port 44802 [preauth] Jan 30 18:07:25.609553 systemd[1]: sshd@11-10.230.68.22:22-52.140.61.101:44802.service: Deactivated successfully. 
Jan 30 18:07:25.619066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 18:07:25.630485 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 18:07:25.957668 kubelet[2540]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 18:07:25.957668 kubelet[2540]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 18:07:25.957668 kubelet[2540]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 18:07:25.959525 kubelet[2540]: I0130 18:07:25.958618 2540 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 18:07:26.647617 kubelet[2540]: I0130 18:07:26.647400 2540 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 18:07:26.647617 kubelet[2540]: I0130 18:07:26.647469 2540 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 18:07:26.647935 kubelet[2540]: I0130 18:07:26.647915 2540 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 18:07:26.690614 kubelet[2540]: I0130 18:07:26.690214 2540 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 18:07:26.695147 kubelet[2540]: E0130 18:07:26.695109 2540 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post "https://10.230.68.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.713893 kubelet[2540]: I0130 18:07:26.713823 2540 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 18:07:26.719227 kubelet[2540]: I0130 18:07:26.719118 2540 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 18:07:26.720882 kubelet[2540]: I0130 18:07:26.719217 2540 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xoz4v.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Ex
perimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 18:07:26.721264 kubelet[2540]: I0130 18:07:26.720986 2540 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 18:07:26.721264 kubelet[2540]: I0130 18:07:26.721011 2540 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 18:07:26.721434 kubelet[2540]: I0130 18:07:26.721361 2540 state_mem.go:36] "Initialized new in-memory state store" Jan 30 18:07:26.724037 kubelet[2540]: W0130 18:07:26.723816 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.68.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xoz4v.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.724037 kubelet[2540]: E0130 18:07:26.723991 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.68.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xoz4v.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.724533 kubelet[2540]: I0130 18:07:26.724501 2540 kubelet.go:400] "Attempting to sync node with API server" Jan 30 18:07:26.724636 kubelet[2540]: I0130 18:07:26.724537 2540 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 18:07:26.725562 kubelet[2540]: I0130 18:07:26.725518 2540 kubelet.go:312] "Adding apiserver pod source" Jan 30 18:07:26.725894 kubelet[2540]: I0130 18:07:26.725648 2540 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 18:07:26.729892 kubelet[2540]: W0130 18:07:26.729499 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.230.68.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.729892 kubelet[2540]: E0130 18:07:26.729616 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.68.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.731788 kubelet[2540]: I0130 18:07:26.731245 2540 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 18:07:26.733898 kubelet[2540]: I0130 18:07:26.733087 2540 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 18:07:26.733898 kubelet[2540]: W0130 18:07:26.733312 2540 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 18:07:26.737230 kubelet[2540]: I0130 18:07:26.737207 2540 server.go:1264] "Started kubelet" Jan 30 18:07:26.740248 kubelet[2540]: I0130 18:07:26.740185 2540 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 18:07:26.742424 kubelet[2540]: I0130 18:07:26.742386 2540 server.go:455] "Adding debug handlers to kubelet server" Jan 30 18:07:26.747619 kubelet[2540]: I0130 18:07:26.746790 2540 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 18:07:26.747619 kubelet[2540]: I0130 18:07:26.747412 2540 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 18:07:26.749890 kubelet[2540]: I0130 18:07:26.749694 2540 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 18:07:26.752262 kubelet[2540]: E0130 18:07:26.752070 2540 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.230.68.22:6443/api/v1/namespaces/default/events\": dial tcp 10.230.68.22:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-xoz4v.gb1.brightbox.com.181f8aaaa6e0949f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-xoz4v.gb1.brightbox.com,UID:srv-xoz4v.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-xoz4v.gb1.brightbox.com,},FirstTimestamp:2025-01-30 18:07:26.737110175 +0000 UTC m=+1.094900984,LastTimestamp:2025-01-30 18:07:26.737110175 +0000 UTC m=+1.094900984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-xoz4v.gb1.brightbox.com,}" Jan 30 18:07:26.758562 kubelet[2540]: I0130 18:07:26.758538 2540 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 18:07:26.763250 kubelet[2540]: I0130 18:07:26.762825 2540 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 18:07:26.763250 kubelet[2540]: I0130 18:07:26.763087 2540 reconciler.go:26] "Reconciler: start to sync state" Jan 30 18:07:26.764314 kubelet[2540]: E0130 18:07:26.763551 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.68.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xoz4v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.22:6443: connect: connection refused" interval="200ms" Jan 30 18:07:26.764654 kubelet[2540]: W0130 18:07:26.764602 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.68.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.764808 kubelet[2540]: E0130 18:07:26.764785 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.68.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.765399 kubelet[2540]: I0130 18:07:26.765374 2540 factory.go:221] Registration of the systemd container factory successfully Jan 30 18:07:26.765630 kubelet[2540]: I0130 18:07:26.765604 2540 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 18:07:26.769646 kubelet[2540]: I0130 18:07:26.769624 2540 factory.go:221] Registration of the containerd container factory successfully Jan 30 18:07:26.778282 kubelet[2540]: E0130 18:07:26.778245 2540 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 18:07:26.797967 kubelet[2540]: I0130 18:07:26.797795 2540 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 18:07:26.800444 kubelet[2540]: I0130 18:07:26.800310 2540 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 18:07:26.800444 kubelet[2540]: I0130 18:07:26.800435 2540 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 18:07:26.800620 kubelet[2540]: I0130 18:07:26.800507 2540 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 18:07:26.800886 kubelet[2540]: E0130 18:07:26.800652 2540 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 18:07:26.812197 kubelet[2540]: W0130 18:07:26.811947 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.68.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.812197 kubelet[2540]: E0130 18:07:26.812029 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.68.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:26.831138 kubelet[2540]: I0130 18:07:26.830597 2540 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 18:07:26.831138 kubelet[2540]: I0130 18:07:26.830632 2540 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 18:07:26.831138 kubelet[2540]: I0130 18:07:26.830677 2540 state_mem.go:36] "Initialized new in-memory state store" Jan 30 18:07:26.834566 kubelet[2540]: I0130 18:07:26.834457 2540 policy_none.go:49] "None policy: Start" Jan 30 18:07:26.835491 kubelet[2540]: I0130 18:07:26.835389 2540 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 18:07:26.836037 kubelet[2540]: I0130 18:07:26.835583 2540 state_mem.go:35] "Initializing new in-memory state store" Jan 30 18:07:26.844898 kubelet[2540]: I0130 18:07:26.844189 2540 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 18:07:26.844898 kubelet[2540]: I0130 18:07:26.844593 2540 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 18:07:26.845159 kubelet[2540]: I0130 18:07:26.845137 2540 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 18:07:26.849272 kubelet[2540]: E0130 18:07:26.849246 2540 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-xoz4v.gb1.brightbox.com\" not found" Jan 30 18:07:26.862271 kubelet[2540]: I0130 18:07:26.862217 2540 kubelet_node_status.go:73] "Attempting to register node" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:26.862976 kubelet[2540]: E0130 18:07:26.862936 2540 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.68.22:6443/api/v1/nodes\": dial tcp 10.230.68.22:6443: connect: connection refused" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:26.903899 kubelet[2540]: I0130 18:07:26.901785 2540 topology_manager.go:215] "Topology Admit Handler" podUID="0cc7bb43a74537e8cd6ca95dcad28dcd" podNamespace="kube-system" podName="kube-apiserver-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:26.907231 kubelet[2540]: I0130 18:07:26.907199 2540 topology_manager.go:215] "Topology Admit Handler" podUID="b65d7107bab364d0710880eeacdeb686" podNamespace="kube-system" podName="kube-controller-manager-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:26.910828 kubelet[2540]: I0130 18:07:26.910795 2540 topology_manager.go:215] "Topology Admit Handler" podUID="0c5c985182516a0cbc189e0b43d0cb69" podNamespace="kube-system" podName="kube-scheduler-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:26.965608 kubelet[2540]: E0130 18:07:26.965512 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.230.68.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xoz4v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.22:6443: connect: connection refused" interval="400ms" Jan 30 18:07:27.064440 kubelet[2540]: I0130 18:07:27.064229 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-kubeconfig\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.064440 kubelet[2540]: I0130 18:07:27.064313 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-flexvolume-dir\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.064440 kubelet[2540]: I0130 18:07:27.064360 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-k8s-certs\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.065342 kubelet[2540]: I0130 18:07:27.064392 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cc7bb43a74537e8cd6ca95dcad28dcd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xoz4v.gb1.brightbox.com\" (UID: \"0cc7bb43a74537e8cd6ca95dcad28dcd\") " pod="kube-system/kube-apiserver-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.065342 
kubelet[2540]: I0130 18:07:27.064633 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-ca-certs\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.065342 kubelet[2540]: I0130 18:07:27.064678 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.065342 kubelet[2540]: I0130 18:07:27.064743 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0c5c985182516a0cbc189e0b43d0cb69-kubeconfig\") pod \"kube-scheduler-srv-xoz4v.gb1.brightbox.com\" (UID: \"0c5c985182516a0cbc189e0b43d0cb69\") " pod="kube-system/kube-scheduler-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.065342 kubelet[2540]: I0130 18:07:27.064773 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cc7bb43a74537e8cd6ca95dcad28dcd-ca-certs\") pod \"kube-apiserver-srv-xoz4v.gb1.brightbox.com\" (UID: \"0cc7bb43a74537e8cd6ca95dcad28dcd\") " pod="kube-system/kube-apiserver-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.065672 kubelet[2540]: I0130 18:07:27.064802 2540 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cc7bb43a74537e8cd6ca95dcad28dcd-k8s-certs\") pod 
\"kube-apiserver-srv-xoz4v.gb1.brightbox.com\" (UID: \"0cc7bb43a74537e8cd6ca95dcad28dcd\") " pod="kube-system/kube-apiserver-srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.067893 kubelet[2540]: I0130 18:07:27.067691 2540 kubelet_node_status.go:73] "Attempting to register node" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.068191 kubelet[2540]: E0130 18:07:27.068149 2540 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.68.22:6443/api/v1/nodes\": dial tcp 10.230.68.22:6443: connect: connection refused" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.218317 containerd[1633]: time="2025-01-30T18:07:27.217623819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xoz4v.gb1.brightbox.com,Uid:0cc7bb43a74537e8cd6ca95dcad28dcd,Namespace:kube-system,Attempt:0,}" Jan 30 18:07:27.223389 containerd[1633]: time="2025-01-30T18:07:27.223346065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-xoz4v.gb1.brightbox.com,Uid:b65d7107bab364d0710880eeacdeb686,Namespace:kube-system,Attempt:0,}" Jan 30 18:07:27.228065 containerd[1633]: time="2025-01-30T18:07:27.228025046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xoz4v.gb1.brightbox.com,Uid:0c5c985182516a0cbc189e0b43d0cb69,Namespace:kube-system,Attempt:0,}" Jan 30 18:07:27.367439 kubelet[2540]: E0130 18:07:27.367356 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.68.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xoz4v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.22:6443: connect: connection refused" interval="800ms" Jan 30 18:07:27.472056 kubelet[2540]: I0130 18:07:27.471804 2540 kubelet_node_status.go:73] "Attempting to register node" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.472773 kubelet[2540]: E0130 18:07:27.472733 2540 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.230.68.22:6443/api/v1/nodes\": dial tcp 10.230.68.22:6443: connect: connection refused" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:27.544810 kubelet[2540]: W0130 18:07:27.544626 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.68.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xoz4v.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.544810 kubelet[2540]: E0130 18:07:27.544755 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.68.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xoz4v.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.715167 kubelet[2540]: W0130 18:07:27.715058 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.68.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.715167 kubelet[2540]: E0130 18:07:27.715163 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.68.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.745042 kubelet[2540]: W0130 18:07:27.744805 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.68.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.745042 kubelet[2540]: E0130 18:07:27.744892 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get "https://10.230.68.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.818013 kubelet[2540]: W0130 18:07:27.817854 2540 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.68.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.818351 kubelet[2540]: E0130 18:07:27.818301 2540 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.68.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:27.834541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861952592.mount: Deactivated successfully. Jan 30 18:07:27.840823 containerd[1633]: time="2025-01-30T18:07:27.839550873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:07:27.843433 containerd[1633]: time="2025-01-30T18:07:27.843378525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 18:07:27.846421 containerd[1633]: time="2025-01-30T18:07:27.846384007Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:07:27.848479 containerd[1633]: time="2025-01-30T18:07:27.848413685Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:07:27.849096 containerd[1633]: time="2025-01-30T18:07:27.848914373Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 18:07:27.850135 containerd[1633]: time="2025-01-30T18:07:27.850075370Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:07:27.852338 containerd[1633]: time="2025-01-30T18:07:27.852015638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 18:07:27.853136 containerd[1633]: time="2025-01-30T18:07:27.853068004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 18:07:27.858888 containerd[1633]: time="2025-01-30T18:07:27.857359975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.92916ms" Jan 30 18:07:27.862205 containerd[1633]: time="2025-01-30T18:07:27.862136595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.277843ms" Jan 30 18:07:27.863879 containerd[1633]: time="2025-01-30T18:07:27.863816693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 635.699ms" Jan 30 18:07:28.115703 containerd[1633]: time="2025-01-30T18:07:28.115430784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:07:28.116275 containerd[1633]: time="2025-01-30T18:07:28.115676007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:07:28.116275 containerd[1633]: time="2025-01-30T18:07:28.115699829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:07:28.116275 containerd[1633]: time="2025-01-30T18:07:28.115878972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:07:28.131853 containerd[1633]: time="2025-01-30T18:07:28.131740501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:07:28.132032 containerd[1633]: time="2025-01-30T18:07:28.131909755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:07:28.132136 containerd[1633]: time="2025-01-30T18:07:28.132075484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:07:28.135855 containerd[1633]: time="2025-01-30T18:07:28.135749311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:07:28.139070 containerd[1633]: time="2025-01-30T18:07:28.138959694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:07:28.139188 containerd[1633]: time="2025-01-30T18:07:28.139110993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:07:28.139258 containerd[1633]: time="2025-01-30T18:07:28.139183120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:07:28.139543 containerd[1633]: time="2025-01-30T18:07:28.139415513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:07:28.168567 kubelet[2540]: E0130 18:07:28.168486 2540 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.68.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xoz4v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.68.22:6443: connect: connection refused" interval="1.6s" Jan 30 18:07:28.278603 kubelet[2540]: I0130 18:07:28.278355 2540 kubelet_node_status.go:73] "Attempting to register node" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:28.282410 kubelet[2540]: E0130 18:07:28.282066 2540 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.68.22:6443/api/v1/nodes\": dial tcp 10.230.68.22:6443: connect: connection refused" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:28.302777 containerd[1633]: time="2025-01-30T18:07:28.302718503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xoz4v.gb1.brightbox.com,Uid:0cc7bb43a74537e8cd6ca95dcad28dcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7183f971b177ed05613d74acd9fbb6cf4e902a329368bb213c056c3ad3a102d9\"" Jan 30 18:07:28.310204 containerd[1633]: time="2025-01-30T18:07:28.309579145Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-xoz4v.gb1.brightbox.com,Uid:b65d7107bab364d0710880eeacdeb686,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fdde2bc342779ec9d1a556ab37a75caaedb9613fa117e84247af5b763144157\"" Jan 30 18:07:28.321926 containerd[1633]: time="2025-01-30T18:07:28.321705274Z" level=info msg="CreateContainer within sandbox \"7183f971b177ed05613d74acd9fbb6cf4e902a329368bb213c056c3ad3a102d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 18:07:28.322579 containerd[1633]: time="2025-01-30T18:07:28.322488865Z" level=info msg="CreateContainer within sandbox \"0fdde2bc342779ec9d1a556ab37a75caaedb9613fa117e84247af5b763144157\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 18:07:28.332991 containerd[1633]: time="2025-01-30T18:07:28.332939179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xoz4v.gb1.brightbox.com,Uid:0c5c985182516a0cbc189e0b43d0cb69,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffb463e410966ef2d095adbe553cd8bcff987e7a2bbc19b4155e6038b1e84754\"" Jan 30 18:07:28.340969 containerd[1633]: time="2025-01-30T18:07:28.340785987Z" level=info msg="CreateContainer within sandbox \"ffb463e410966ef2d095adbe553cd8bcff987e7a2bbc19b4155e6038b1e84754\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 18:07:28.354092 containerd[1633]: time="2025-01-30T18:07:28.354029103Z" level=info msg="CreateContainer within sandbox \"0fdde2bc342779ec9d1a556ab37a75caaedb9613fa117e84247af5b763144157\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4659acdcbe52198f2bbd4130f27e9dd4c59e13bd3ddafbe2b0cc8ca641294fc6\"" Jan 30 18:07:28.354806 containerd[1633]: time="2025-01-30T18:07:28.354773977Z" level=info msg="StartContainer for \"4659acdcbe52198f2bbd4130f27e9dd4c59e13bd3ddafbe2b0cc8ca641294fc6\"" Jan 30 18:07:28.361368 containerd[1633]: time="2025-01-30T18:07:28.361217367Z" level=info 
msg="CreateContainer within sandbox \"7183f971b177ed05613d74acd9fbb6cf4e902a329368bb213c056c3ad3a102d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d7c6eb9c2cda5efc61e53bce9fb79db6ea371668d3aea478d0b42246101b022e\"" Jan 30 18:07:28.363575 containerd[1633]: time="2025-01-30T18:07:28.362297694Z" level=info msg="StartContainer for \"d7c6eb9c2cda5efc61e53bce9fb79db6ea371668d3aea478d0b42246101b022e\"" Jan 30 18:07:28.374772 containerd[1633]: time="2025-01-30T18:07:28.374595136Z" level=info msg="CreateContainer within sandbox \"ffb463e410966ef2d095adbe553cd8bcff987e7a2bbc19b4155e6038b1e84754\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e0ad8d5555e1a2a8e6d84851c429fb2e0e6edcaee1cfab690b2c8009a988f1c\"" Jan 30 18:07:28.377365 containerd[1633]: time="2025-01-30T18:07:28.377334945Z" level=info msg="StartContainer for \"4e0ad8d5555e1a2a8e6d84851c429fb2e0e6edcaee1cfab690b2c8009a988f1c\"" Jan 30 18:07:28.551742 containerd[1633]: time="2025-01-30T18:07:28.551674674Z" level=info msg="StartContainer for \"d7c6eb9c2cda5efc61e53bce9fb79db6ea371668d3aea478d0b42246101b022e\" returns successfully" Jan 30 18:07:28.572062 containerd[1633]: time="2025-01-30T18:07:28.571988010Z" level=info msg="StartContainer for \"4659acdcbe52198f2bbd4130f27e9dd4c59e13bd3ddafbe2b0cc8ca641294fc6\" returns successfully" Jan 30 18:07:28.582071 containerd[1633]: time="2025-01-30T18:07:28.582007495Z" level=info msg="StartContainer for \"4e0ad8d5555e1a2a8e6d84851c429fb2e0e6edcaee1cfab690b2c8009a988f1c\" returns successfully" Jan 30 18:07:28.799726 kubelet[2540]: E0130 18:07:28.797480 2540 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.68.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.68.22:6443: connect: connection refused Jan 30 18:07:29.751338 
systemd[1]: Started sshd@12-10.230.68.22:22-185.147.124.49:22788.service - OpenSSH per-connection server daemon (185.147.124.49:22788). Jan 30 18:07:29.893748 kubelet[2540]: I0130 18:07:29.892737 2540 kubelet_node_status.go:73] "Attempting to register node" node="srv-xoz4v.gb1.brightbox.com" Jan 30 18:07:29.991860 sshd[2817]: Invalid user from 185.147.124.49 port 22788 Jan 30 18:07:30.042031 sshd[2817]: Connection reset by invalid user 185.147.124.49 port 22788 [preauth] Jan 30 18:07:30.040819 systemd[1]: sshd@12-10.230.68.22:22-185.147.124.49:22788.service: Deactivated successfully. Jan 30 18:07:30.124279 systemd[1]: Started sshd@13-10.230.68.22:22-185.147.124.49:22802.service - OpenSSH per-connection server daemon (185.147.124.49:22802). Jan 30 18:07:30.413050 sshd[2822]: Invalid user EXAMPLES from 185.147.124.49 port 22802 Jan 30 18:07:30.460839 sshd[2822]: Connection reset by invalid user EXAMPLES 185.147.124.49 port 22802 [preauth] Jan 30 18:07:30.462588 systemd[1]: sshd@13-10.230.68.22:22-185.147.124.49:22802.service: Deactivated successfully. Jan 30 18:07:30.520194 systemd[1]: Started sshd@14-10.230.68.22:22-185.147.124.49:22818.service - OpenSSH per-connection server daemon (185.147.124.49:22818). Jan 30 18:07:30.758020 sshd[2827]: Invalid user from 185.147.124.49 port 22818 Jan 30 18:07:30.804836 sshd[2827]: Connection reset by invalid user 185.147.124.49 port 22818 [preauth] Jan 30 18:07:30.810419 systemd[1]: sshd@14-10.230.68.22:22-185.147.124.49:22818.service: Deactivated successfully. Jan 30 18:07:30.862979 systemd[1]: Started sshd@15-10.230.68.22:22-185.147.124.49:22830.service - OpenSSH per-connection server daemon (185.147.124.49:22830). Jan 30 18:07:31.096081 sshd[2832]: Invalid user aaPower from 185.147.124.49 port 22830 Jan 30 18:07:31.142429 sshd[2832]: Connection reset by invalid user aaPower 185.147.124.49 port 22830 [preauth] Jan 30 18:07:31.148512 systemd[1]: sshd@15-10.230.68.22:22-185.147.124.49:22830.service: Deactivated successfully. 
Jan 30 18:07:31.205302 systemd[1]: Started sshd@16-10.230.68.22:22-185.147.124.49:22836.service - OpenSSH per-connection server daemon (185.147.124.49:22836).
Jan 30 18:07:31.424849 sshd[2837]: Invalid user Administrator from 185.147.124.49 port 22836
Jan 30 18:07:31.474942 sshd[2837]: Connection reset by invalid user Administrator 185.147.124.49 port 22836 [preauth]
Jan 30 18:07:31.474120 systemd[1]: sshd@16-10.230.68.22:22-185.147.124.49:22836.service: Deactivated successfully.
Jan 30 18:07:31.597194 kubelet[2540]: E0130 18:07:31.597140 2540 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-xoz4v.gb1.brightbox.com\" not found" node="srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:31.715352 kubelet[2540]: I0130 18:07:31.714640 2540 kubelet_node_status.go:76] "Successfully registered node" node="srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:31.730725 kubelet[2540]: I0130 18:07:31.730680 2540 apiserver.go:52] "Watching apiserver"
Jan 30 18:07:31.763951 kubelet[2540]: I0130 18:07:31.763863 2540 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 18:07:33.589148 systemd[1]: Reloading requested from client PID 2843 ('systemctl') (unit session-11.scope)...
Jan 30 18:07:33.589190 systemd[1]: Reloading...
Jan 30 18:07:33.717012 zram_generator::config[2886]: No configuration found.
Jan 30 18:07:33.928973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 18:07:34.049043 systemd[1]: Reloading finished in 459 ms.
Jan 30 18:07:34.105831 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 18:07:34.122285 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 18:07:34.123487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 18:07:34.130373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 18:07:34.408982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 18:07:34.423639 (kubelet)[2960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 18:07:34.517716 kubelet[2960]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 18:07:34.517716 kubelet[2960]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 18:07:34.517716 kubelet[2960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 18:07:34.518635 kubelet[2960]: I0130 18:07:34.517756 2960 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 18:07:34.526349 kubelet[2960]: I0130 18:07:34.526263 2960 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 18:07:34.526349 kubelet[2960]: I0130 18:07:34.526351 2960 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 18:07:34.526848 kubelet[2960]: I0130 18:07:34.526815 2960 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 18:07:34.529548 kubelet[2960]: I0130 18:07:34.529513 2960 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 18:07:34.535129 kubelet[2960]: I0130 18:07:34.534107 2960 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 18:07:34.547627 kubelet[2960]: I0130 18:07:34.546177 2960 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 18:07:34.547627 kubelet[2960]: I0130 18:07:34.546918 2960 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 18:07:34.547627 kubelet[2960]: I0130 18:07:34.546965 2960 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xoz4v.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 18:07:34.547627 kubelet[2960]: I0130 18:07:34.547243 2960 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 18:07:34.548656 kubelet[2960]: I0130 18:07:34.547260 2960 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 18:07:34.549231 kubelet[2960]: I0130 18:07:34.548730 2960 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 18:07:34.549231 kubelet[2960]: I0130 18:07:34.549011 2960 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 18:07:34.549231 kubelet[2960]: I0130 18:07:34.549035 2960 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 18:07:34.549231 kubelet[2960]: I0130 18:07:34.549088 2960 kubelet.go:312] "Adding apiserver pod source"
Jan 30 18:07:34.549231 kubelet[2960]: I0130 18:07:34.549111 2960 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 18:07:34.569777 kubelet[2960]: I0130 18:07:34.568397 2960 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 18:07:34.569777 kubelet[2960]: I0130 18:07:34.568750 2960 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 18:07:34.573488 kubelet[2960]: I0130 18:07:34.572307 2960 server.go:1264] "Started kubelet"
Jan 30 18:07:34.578721 kubelet[2960]: I0130 18:07:34.576285 2960 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 18:07:34.579681 kubelet[2960]: I0130 18:07:34.577159 2960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 18:07:34.580647 kubelet[2960]: I0130 18:07:34.580614 2960 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 18:07:34.591225 kubelet[2960]: I0130 18:07:34.591150 2960 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 18:07:34.594548 kubelet[2960]: I0130 18:07:34.591566 2960 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 18:07:34.594548 kubelet[2960]: I0130 18:07:34.591608 2960 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 18:07:34.597563 kubelet[2960]: I0130 18:07:34.594950 2960 factory.go:221] Registration of the systemd container factory successfully
Jan 30 18:07:34.597563 kubelet[2960]: I0130 18:07:34.595121 2960 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 18:07:34.597563 kubelet[2960]: I0130 18:07:34.596051 2960 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 18:07:34.599479 kubelet[2960]: E0130 18:07:34.599421 2960 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 18:07:34.601964 kubelet[2960]: I0130 18:07:34.601338 2960 factory.go:221] Registration of the containerd container factory successfully
Jan 30 18:07:34.603760 kubelet[2960]: I0130 18:07:34.602858 2960 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 18:07:34.628645 kubelet[2960]: I0130 18:07:34.628487 2960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 18:07:34.634349 kubelet[2960]: I0130 18:07:34.634299 2960 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 18:07:34.634824 kubelet[2960]: I0130 18:07:34.634632 2960 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 18:07:34.634824 kubelet[2960]: I0130 18:07:34.634670 2960 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 18:07:34.634824 kubelet[2960]: E0130 18:07:34.634743 2960 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 18:07:34.716524 kubelet[2960]: I0130 18:07:34.715341 2960 kubelet_node_status.go:73] "Attempting to register node" node="srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:34.732017 kubelet[2960]: I0130 18:07:34.731512 2960 kubelet_node_status.go:112] "Node was previously registered" node="srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:34.732017 kubelet[2960]: I0130 18:07:34.731648 2960 kubelet_node_status.go:76] "Successfully registered node" node="srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:34.736481 kubelet[2960]: E0130 18:07:34.735229 2960 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 18:07:34.787240 kubelet[2960]: I0130 18:07:34.786912 2960 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 18:07:34.787240 kubelet[2960]: I0130 18:07:34.786981 2960 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 18:07:34.787240 kubelet[2960]: I0130 18:07:34.787015 2960 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 18:07:34.787644 kubelet[2960]: I0130 18:07:34.787409 2960 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 18:07:34.787644 kubelet[2960]: I0130 18:07:34.787429 2960 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 18:07:34.787644 kubelet[2960]: I0130 18:07:34.787520 2960 policy_none.go:49] "None policy: Start"
Jan 30 18:07:34.789188 kubelet[2960]: I0130 18:07:34.788343 2960 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 18:07:34.789188 kubelet[2960]: I0130 18:07:34.788378 2960 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 18:07:34.789188 kubelet[2960]: I0130 18:07:34.788633 2960 state_mem.go:75] "Updated machine memory state"
Jan 30 18:07:34.793077 kubelet[2960]: I0130 18:07:34.792325 2960 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 18:07:34.793077 kubelet[2960]: I0130 18:07:34.792573 2960 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 18:07:34.794986 kubelet[2960]: I0130 18:07:34.794663 2960 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 18:07:34.936831 kubelet[2960]: I0130 18:07:34.936363 2960 topology_manager.go:215] "Topology Admit Handler" podUID="0cc7bb43a74537e8cd6ca95dcad28dcd" podNamespace="kube-system" podName="kube-apiserver-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:34.936831 kubelet[2960]: I0130 18:07:34.936544 2960 topology_manager.go:215] "Topology Admit Handler" podUID="b65d7107bab364d0710880eeacdeb686" podNamespace="kube-system" podName="kube-controller-manager-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:34.936831 kubelet[2960]: I0130 18:07:34.936657 2960 topology_manager.go:215] "Topology Admit Handler" podUID="0c5c985182516a0cbc189e0b43d0cb69" podNamespace="kube-system" podName="kube-scheduler-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:34.956816 kubelet[2960]: W0130 18:07:34.955806 2960 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 18:07:34.956816 kubelet[2960]: W0130 18:07:34.956175 2960 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 18:07:34.956816 kubelet[2960]: W0130 18:07:34.956413 2960 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 18:07:34.999950 kubelet[2960]: I0130 18:07:34.999789 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cc7bb43a74537e8cd6ca95dcad28dcd-k8s-certs\") pod \"kube-apiserver-srv-xoz4v.gb1.brightbox.com\" (UID: \"0cc7bb43a74537e8cd6ca95dcad28dcd\") " pod="kube-system/kube-apiserver-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.000239 kubelet[2960]: I0130 18:07:35.000178 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-ca-certs\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.000728 kubelet[2960]: I0130 18:07:35.000447 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-k8s-certs\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.000728 kubelet[2960]: I0130 18:07:35.000560 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-kubeconfig\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.000728 kubelet[2960]: I0130 18:07:35.000609 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.000728 kubelet[2960]: I0130 18:07:35.000654 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cc7bb43a74537e8cd6ca95dcad28dcd-ca-certs\") pod \"kube-apiserver-srv-xoz4v.gb1.brightbox.com\" (UID: \"0cc7bb43a74537e8cd6ca95dcad28dcd\") " pod="kube-system/kube-apiserver-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.001992 kubelet[2960]: I0130 18:07:35.001739 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cc7bb43a74537e8cd6ca95dcad28dcd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xoz4v.gb1.brightbox.com\" (UID: \"0cc7bb43a74537e8cd6ca95dcad28dcd\") " pod="kube-system/kube-apiserver-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.001992 kubelet[2960]: I0130 18:07:35.001802 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b65d7107bab364d0710880eeacdeb686-flexvolume-dir\") pod \"kube-controller-manager-srv-xoz4v.gb1.brightbox.com\" (UID: \"b65d7107bab364d0710880eeacdeb686\") " pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.001992 kubelet[2960]: I0130 18:07:35.001844 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0c5c985182516a0cbc189e0b43d0cb69-kubeconfig\") pod \"kube-scheduler-srv-xoz4v.gb1.brightbox.com\" (UID: \"0c5c985182516a0cbc189e0b43d0cb69\") " pod="kube-system/kube-scheduler-srv-xoz4v.gb1.brightbox.com"
Jan 30 18:07:35.555889 kubelet[2960]: I0130 18:07:35.555810 2960 apiserver.go:52] "Watching apiserver"
Jan 30 18:07:35.594904 kubelet[2960]: I0130 18:07:35.594821 2960 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 18:07:35.873251 kubelet[2960]: I0130 18:07:35.872722 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-xoz4v.gb1.brightbox.com" podStartSLOduration=1.872683505 podStartE2EDuration="1.872683505s" podCreationTimestamp="2025-01-30 18:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:07:35.830151346 +0000 UTC m=+1.396756638" watchObservedRunningTime="2025-01-30 18:07:35.872683505 +0000 UTC m=+1.439288782"
Jan 30 18:07:35.900595 kubelet[2960]: I0130 18:07:35.899383 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-xoz4v.gb1.brightbox.com" podStartSLOduration=1.899361335 podStartE2EDuration="1.899361335s" podCreationTimestamp="2025-01-30 18:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:07:35.873659602 +0000 UTC m=+1.440264885" watchObservedRunningTime="2025-01-30 18:07:35.899361335 +0000 UTC m=+1.465966627"
Jan 30 18:07:35.900595 kubelet[2960]: I0130 18:07:35.900051 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-xoz4v.gb1.brightbox.com" podStartSLOduration=1.900041836 podStartE2EDuration="1.900041836s" podCreationTimestamp="2025-01-30 18:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:07:35.894106656 +0000 UTC m=+1.460711940" watchObservedRunningTime="2025-01-30 18:07:35.900041836 +0000 UTC m=+1.466647127"
Jan 30 18:07:40.526785 sudo[1938]: pam_unix(sudo:session): session closed for user root
Jan 30 18:07:40.673765 sshd[1934]: pam_unix(sshd:session): session closed for user core
Jan 30 18:07:40.680576 systemd[1]: sshd@9-10.230.68.22:22-139.178.89.65:60178.service: Deactivated successfully.
Jan 30 18:07:40.684687 systemd-logind[1602]: Session 11 logged out. Waiting for processes to exit.
Jan 30 18:07:40.685125 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 18:07:40.689273 systemd-logind[1602]: Removed session 11.
Jan 30 18:07:49.108073 kubelet[2960]: I0130 18:07:49.108028 2960 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 18:07:49.108949 containerd[1633]: time="2025-01-30T18:07:49.108826444Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 18:07:49.109641 kubelet[2960]: I0130 18:07:49.109605 2960 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 18:07:49.392772 kubelet[2960]: I0130 18:07:49.392559 2960 topology_manager.go:215] "Topology Admit Handler" podUID="c59f662b-6bc8-4625-ad7e-2fe958d9c275" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-gshxq"
Jan 30 18:07:49.495113 kubelet[2960]: I0130 18:07:49.495046 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c59f662b-6bc8-4625-ad7e-2fe958d9c275-var-lib-calico\") pod \"tigera-operator-7bc55997bb-gshxq\" (UID: \"c59f662b-6bc8-4625-ad7e-2fe958d9c275\") " pod="tigera-operator/tigera-operator-7bc55997bb-gshxq"
Jan 30 18:07:49.495113 kubelet[2960]: I0130 18:07:49.495113 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtmfj\" (UniqueName: \"kubernetes.io/projected/c59f662b-6bc8-4625-ad7e-2fe958d9c275-kube-api-access-dtmfj\") pod \"tigera-operator-7bc55997bb-gshxq\" (UID: \"c59f662b-6bc8-4625-ad7e-2fe958d9c275\") " pod="tigera-operator/tigera-operator-7bc55997bb-gshxq"
Jan 30 18:07:49.507372 kubelet[2960]: I0130 18:07:49.507315 2960 topology_manager.go:215] "Topology Admit Handler" podUID="990799f4-ce6b-42ef-9a7b-e07614828cd7" podNamespace="kube-system" podName="kube-proxy-gg99p"
Jan 30 18:07:49.596039 kubelet[2960]: I0130 18:07:49.595977 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/990799f4-ce6b-42ef-9a7b-e07614828cd7-kube-proxy\") pod \"kube-proxy-gg99p\" (UID: \"990799f4-ce6b-42ef-9a7b-e07614828cd7\") " pod="kube-system/kube-proxy-gg99p"
Jan 30 18:07:49.596039 kubelet[2960]: I0130 18:07:49.596050 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/990799f4-ce6b-42ef-9a7b-e07614828cd7-xtables-lock\") pod \"kube-proxy-gg99p\" (UID: \"990799f4-ce6b-42ef-9a7b-e07614828cd7\") " pod="kube-system/kube-proxy-gg99p"
Jan 30 18:07:49.596039 kubelet[2960]: I0130 18:07:49.596132 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/990799f4-ce6b-42ef-9a7b-e07614828cd7-lib-modules\") pod \"kube-proxy-gg99p\" (UID: \"990799f4-ce6b-42ef-9a7b-e07614828cd7\") " pod="kube-system/kube-proxy-gg99p"
Jan 30 18:07:49.596039 kubelet[2960]: I0130 18:07:49.596159 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q28\" (UniqueName: \"kubernetes.io/projected/990799f4-ce6b-42ef-9a7b-e07614828cd7-kube-api-access-b5q28\") pod \"kube-proxy-gg99p\" (UID: \"990799f4-ce6b-42ef-9a7b-e07614828cd7\") " pod="kube-system/kube-proxy-gg99p"
Jan 30 18:07:49.707358 containerd[1633]: time="2025-01-30T18:07:49.706706820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gshxq,Uid:c59f662b-6bc8-4625-ad7e-2fe958d9c275,Namespace:tigera-operator,Attempt:0,}"
Jan 30 18:07:49.746114 containerd[1633]: time="2025-01-30T18:07:49.745759828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 18:07:49.746114 containerd[1633]: time="2025-01-30T18:07:49.746048560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 18:07:49.746114 containerd[1633]: time="2025-01-30T18:07:49.746074287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:49.746477 containerd[1633]: time="2025-01-30T18:07:49.746241034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:49.816602 containerd[1633]: time="2025-01-30T18:07:49.816388078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gg99p,Uid:990799f4-ce6b-42ef-9a7b-e07614828cd7,Namespace:kube-system,Attempt:0,}"
Jan 30 18:07:49.827587 containerd[1633]: time="2025-01-30T18:07:49.827280117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-gshxq,Uid:c59f662b-6bc8-4625-ad7e-2fe958d9c275,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9ac411bb9e9038f73738daf12977df4cf963c1985c7dbdcf2ae712470d32e2f6\""
Jan 30 18:07:49.836685 containerd[1633]: time="2025-01-30T18:07:49.836641882Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 30 18:07:49.857462 containerd[1633]: time="2025-01-30T18:07:49.857231759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 18:07:49.857462 containerd[1633]: time="2025-01-30T18:07:49.857354749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 18:07:49.857462 containerd[1633]: time="2025-01-30T18:07:49.857384596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:49.857981 containerd[1633]: time="2025-01-30T18:07:49.857570299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:49.909662 containerd[1633]: time="2025-01-30T18:07:49.909608535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gg99p,Uid:990799f4-ce6b-42ef-9a7b-e07614828cd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"71cf5f422d43926072fe1c831223caaf03dd2efef672f13aa9f17521a89088b7\""
Jan 30 18:07:49.914925 containerd[1633]: time="2025-01-30T18:07:49.914860506Z" level=info msg="CreateContainer within sandbox \"71cf5f422d43926072fe1c831223caaf03dd2efef672f13aa9f17521a89088b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 18:07:49.928840 containerd[1633]: time="2025-01-30T18:07:49.928737058Z" level=info msg="CreateContainer within sandbox \"71cf5f422d43926072fe1c831223caaf03dd2efef672f13aa9f17521a89088b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7011b4f9d63187e6a314a825bb161a22cf375042b85ff8e15ed52e7edd766e0f\""
Jan 30 18:07:49.930924 containerd[1633]: time="2025-01-30T18:07:49.929692118Z" level=info msg="StartContainer for \"7011b4f9d63187e6a314a825bb161a22cf375042b85ff8e15ed52e7edd766e0f\""
Jan 30 18:07:50.020961 containerd[1633]: time="2025-01-30T18:07:50.020232739Z" level=info msg="StartContainer for \"7011b4f9d63187e6a314a825bb161a22cf375042b85ff8e15ed52e7edd766e0f\" returns successfully"
Jan 30 18:07:50.835200 systemd[1]: Started sshd@17-10.230.68.22:22-136.232.203.134:64409.service - OpenSSH per-connection server daemon (136.232.203.134:64409).
Jan 30 18:07:52.004930 sshd[3280]: Invalid user test from 136.232.203.134 port 64409
Jan 30 18:07:52.160715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833259522.mount: Deactivated successfully.
Jan 30 18:07:52.256105 sshd[3280]: Received disconnect from 136.232.203.134 port 64409:11: Bye Bye [preauth]
Jan 30 18:07:52.256105 sshd[3280]: Disconnected from invalid user test 136.232.203.134 port 64409 [preauth]
Jan 30 18:07:52.262620 systemd[1]: sshd@17-10.230.68.22:22-136.232.203.134:64409.service: Deactivated successfully.
Jan 30 18:07:53.033454 containerd[1633]: time="2025-01-30T18:07:53.033339918Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 18:07:53.035670 containerd[1633]: time="2025-01-30T18:07:53.035622456Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 30 18:07:53.036649 containerd[1633]: time="2025-01-30T18:07:53.036614323Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 18:07:53.048647 containerd[1633]: time="2025-01-30T18:07:53.048570816Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 18:07:53.050529 containerd[1633]: time="2025-01-30T18:07:53.050382283Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.212457039s"
Jan 30 18:07:53.050529 containerd[1633]: time="2025-01-30T18:07:53.050434295Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 30 18:07:53.055795 containerd[1633]: time="2025-01-30T18:07:53.055707414Z" level=info msg="CreateContainer within sandbox \"9ac411bb9e9038f73738daf12977df4cf963c1985c7dbdcf2ae712470d32e2f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 30 18:07:53.091953 containerd[1633]: time="2025-01-30T18:07:53.091838128Z" level=info msg="CreateContainer within sandbox \"9ac411bb9e9038f73738daf12977df4cf963c1985c7dbdcf2ae712470d32e2f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"336c0672209870eb70026f8322ce3b4aea5d73165c2ea177ccf7106bdb80b63b\""
Jan 30 18:07:53.093023 containerd[1633]: time="2025-01-30T18:07:53.092902454Z" level=info msg="StartContainer for \"336c0672209870eb70026f8322ce3b4aea5d73165c2ea177ccf7106bdb80b63b\""
Jan 30 18:07:53.144082 systemd[1]: run-containerd-runc-k8s.io-336c0672209870eb70026f8322ce3b4aea5d73165c2ea177ccf7106bdb80b63b-runc.4SbKEH.mount: Deactivated successfully.
Jan 30 18:07:53.183539 containerd[1633]: time="2025-01-30T18:07:53.183469161Z" level=info msg="StartContainer for \"336c0672209870eb70026f8322ce3b4aea5d73165c2ea177ccf7106bdb80b63b\" returns successfully"
Jan 30 18:07:53.750831 kubelet[2960]: I0130 18:07:53.749230 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-gshxq" podStartSLOduration=1.530784718 podStartE2EDuration="4.749204891s" podCreationTimestamp="2025-01-30 18:07:49 +0000 UTC" firstStartedPulling="2025-01-30 18:07:49.833184808 +0000 UTC m=+15.399790081" lastFinishedPulling="2025-01-30 18:07:53.051604975 +0000 UTC m=+18.618210254" observedRunningTime="2025-01-30 18:07:53.748596085 +0000 UTC m=+19.315201365" watchObservedRunningTime="2025-01-30 18:07:53.749204891 +0000 UTC m=+19.315810174"
Jan 30 18:07:53.751639 kubelet[2960]: I0130 18:07:53.751607 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gg99p" podStartSLOduration=4.751550054 podStartE2EDuration="4.751550054s" podCreationTimestamp="2025-01-30 18:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:07:50.727487328 +0000 UTC m=+16.294092619" watchObservedRunningTime="2025-01-30 18:07:53.751550054 +0000 UTC m=+19.318155360"
Jan 30 18:07:56.489647 kubelet[2960]: I0130 18:07:56.489566 2960 topology_manager.go:215] "Topology Admit Handler" podUID="19382625-9aae-428e-804f-6136f46a9d91" podNamespace="calico-system" podName="calico-typha-d7cd8566f-xrvhl"
Jan 30 18:07:56.551461 kubelet[2960]: I0130 18:07:56.551175 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19382625-9aae-428e-804f-6136f46a9d91-tigera-ca-bundle\") pod \"calico-typha-d7cd8566f-xrvhl\" (UID: \"19382625-9aae-428e-804f-6136f46a9d91\") " pod="calico-system/calico-typha-d7cd8566f-xrvhl"
Jan 30 18:07:56.553093 kubelet[2960]: I0130 18:07:56.552975 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/19382625-9aae-428e-804f-6136f46a9d91-typha-certs\") pod \"calico-typha-d7cd8566f-xrvhl\" (UID: \"19382625-9aae-428e-804f-6136f46a9d91\") " pod="calico-system/calico-typha-d7cd8566f-xrvhl"
Jan 30 18:07:56.553093 kubelet[2960]: I0130 18:07:56.553025 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9s7\" (UniqueName: \"kubernetes.io/projected/19382625-9aae-428e-804f-6136f46a9d91-kube-api-access-pj9s7\") pod \"calico-typha-d7cd8566f-xrvhl\" (UID: \"19382625-9aae-428e-804f-6136f46a9d91\") " pod="calico-system/calico-typha-d7cd8566f-xrvhl"
Jan 30 18:07:56.599556 kubelet[2960]: I0130 18:07:56.598537 2960 topology_manager.go:215] "Topology Admit Handler" podUID="38063852-f177-4399-9b62-f941670f8d9b" podNamespace="calico-system" podName="calico-node-kbd4g"
Jan 30 18:07:56.655076 kubelet[2960]: I0130 18:07:56.654906 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-xtables-lock\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.655076 kubelet[2960]: I0130 18:07:56.655360 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38063852-f177-4399-9b62-f941670f8d9b-tigera-ca-bundle\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.656015 kubelet[2960]: I0130 18:07:56.655413 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-var-run-calico\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.656015 kubelet[2960]: I0130 18:07:56.655904 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-policysync\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.656015 kubelet[2960]: I0130 18:07:56.655944 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/38063852-f177-4399-9b62-f941670f8d9b-node-certs\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.656168 kubelet[2960]: I0130 18:07:56.656023 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-flexvol-driver-host\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.656237 kubelet[2960]: I0130 18:07:56.656175 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xccw7\" (UniqueName: \"kubernetes.io/projected/38063852-f177-4399-9b62-f941670f8d9b-kube-api-access-xccw7\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.656772 kubelet[2960]: I0130 18:07:56.656428 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-lib-modules\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.657153 kubelet[2960]: I0130 18:07:56.656851 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-cni-net-dir\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.657242 kubelet[2960]: I0130 18:07:56.657192 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-var-lib-calico\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.657242 kubelet[2960]: I0130 18:07:56.657224 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-cni-bin-dir\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.657350 kubelet[2960]: I0130 18:07:56.657261 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/38063852-f177-4399-9b62-f941670f8d9b-cni-log-dir\") pod \"calico-node-kbd4g\" (UID: \"38063852-f177-4399-9b62-f941670f8d9b\") " pod="calico-system/calico-node-kbd4g"
Jan 30 18:07:56.725705 kubelet[2960]: I0130 18:07:56.725624 2960 topology_manager.go:215] "Topology Admit Handler" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94" podNamespace="calico-system" podName="csi-node-driver-tdn4g"
Jan 30 18:07:56.726371 kubelet[2960]: E0130 18:07:56.726060 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94"
Jan 30 18:07:56.758582 kubelet[2960]: I0130 18:07:56.758371 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75vv\" (UniqueName: \"kubernetes.io/projected/ef41d1d4-bd1d-45f6-8486-f723f43e3c94-kube-api-access-l75vv\") pod \"csi-node-driver-tdn4g\" (UID: \"ef41d1d4-bd1d-45f6-8486-f723f43e3c94\") " pod="calico-system/csi-node-driver-tdn4g"
Jan 30 18:07:56.758582 kubelet[2960]: I0130 18:07:56.758573 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef41d1d4-bd1d-45f6-8486-f723f43e3c94-kubelet-dir\") pod \"csi-node-driver-tdn4g\" (UID: \"ef41d1d4-bd1d-45f6-8486-f723f43e3c94\") " pod="calico-system/csi-node-driver-tdn4g"
Jan 30 18:07:56.758931 kubelet[2960]: I0130 18:07:56.758607 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef41d1d4-bd1d-45f6-8486-f723f43e3c94-socket-dir\") pod \"csi-node-driver-tdn4g\" (UID: \"ef41d1d4-bd1d-45f6-8486-f723f43e3c94\") " pod="calico-system/csi-node-driver-tdn4g"
Jan 30 18:07:56.758931 kubelet[2960]: I0130 18:07:56.758667 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef41d1d4-bd1d-45f6-8486-f723f43e3c94-varrun\") pod \"csi-node-driver-tdn4g\" (UID: \"ef41d1d4-bd1d-45f6-8486-f723f43e3c94\") " pod="calico-system/csi-node-driver-tdn4g"
Jan 30 18:07:56.758931 kubelet[2960]: I0130 18:07:56.758753 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef41d1d4-bd1d-45f6-8486-f723f43e3c94-registration-dir\") pod \"csi-node-driver-tdn4g\" (UID: \"ef41d1d4-bd1d-45f6-8486-f723f43e3c94\") " pod="calico-system/csi-node-driver-tdn4g"
Jan 30 18:07:56.784559 kubelet[2960]: E0130 18:07:56.784267 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.784559 kubelet[2960]: W0130 18:07:56.784348 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.793090 kubelet[2960]: E0130 18:07:56.791550 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.793190 kubelet[2960]: E0130 18:07:56.793150 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.793269 kubelet[2960]: W0130 18:07:56.793184 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.793269 kubelet[2960]: E0130 18:07:56.793218 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.807759 containerd[1633]: time="2025-01-30T18:07:56.804747179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d7cd8566f-xrvhl,Uid:19382625-9aae-428e-804f-6136f46a9d91,Namespace:calico-system,Attempt:0,}"
Jan 30 18:07:56.861659 kubelet[2960]: E0130 18:07:56.860548 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.861659 kubelet[2960]: W0130 18:07:56.860916 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.861659 kubelet[2960]: E0130 18:07:56.860963 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.864196 kubelet[2960]: E0130 18:07:56.863836 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.866141 kubelet[2960]: W0130 18:07:56.864685 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.866141 kubelet[2960]: E0130 18:07:56.864723 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.866141 kubelet[2960]: E0130 18:07:56.865790 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.866141 kubelet[2960]: W0130 18:07:56.865805 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.866141 kubelet[2960]: E0130 18:07:56.865821 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.866854 kubelet[2960]: E0130 18:07:56.866516 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.866854 kubelet[2960]: W0130 18:07:56.866538 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.866854 kubelet[2960]: E0130 18:07:56.866647 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.867557 kubelet[2960]: E0130 18:07:56.866965 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.867557 kubelet[2960]: W0130 18:07:56.866979 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.867557 kubelet[2960]: E0130 18:07:56.867074 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.867557 kubelet[2960]: E0130 18:07:56.867351 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.867557 kubelet[2960]: W0130 18:07:56.867365 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.867557 kubelet[2960]: E0130 18:07:56.867423 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.870321 kubelet[2960]: E0130 18:07:56.868162 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.870321 kubelet[2960]: W0130 18:07:56.868178 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.870321 kubelet[2960]: E0130 18:07:56.868272 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.870321 kubelet[2960]: E0130 18:07:56.868971 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.870321 kubelet[2960]: W0130 18:07:56.868985 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.870321 kubelet[2960]: E0130 18:07:56.869119 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 30 18:07:56.870321 kubelet[2960]: E0130 18:07:56.869282 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.870321 kubelet[2960]: W0130 18:07:56.869295 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.870321 kubelet[2960]: E0130 18:07:56.869402 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.874840 kubelet[2960]: E0130 18:07:56.871069 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.874840 kubelet[2960]: W0130 18:07:56.871086 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.874840 kubelet[2960]: E0130 18:07:56.871191 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.874840 kubelet[2960]: E0130 18:07:56.871394 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.874840 kubelet[2960]: W0130 18:07:56.871420 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.874840 kubelet[2960]: E0130 18:07:56.871508 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.874840 kubelet[2960]: E0130 18:07:56.871727 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.874840 kubelet[2960]: W0130 18:07:56.871740 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.874840 kubelet[2960]: E0130 18:07:56.871843 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.874840 kubelet[2960]: E0130 18:07:56.872151 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.875585 kubelet[2960]: W0130 18:07:56.872167 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.875585 kubelet[2960]: E0130 18:07:56.872258 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.875585 kubelet[2960]: E0130 18:07:56.872695 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.875585 kubelet[2960]: W0130 18:07:56.872709 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.875585 kubelet[2960]: E0130 18:07:56.874026 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.875585 kubelet[2960]: E0130 18:07:56.874119 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.875585 kubelet[2960]: W0130 18:07:56.874133 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.875585 kubelet[2960]: E0130 18:07:56.874222 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.875585 kubelet[2960]: E0130 18:07:56.874442 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.875585 kubelet[2960]: W0130 18:07:56.874456 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.883481 kubelet[2960]: E0130 18:07:56.874554 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.883481 kubelet[2960]: E0130 18:07:56.874810 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.883481 kubelet[2960]: W0130 18:07:56.874825 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.883481 kubelet[2960]: E0130 18:07:56.875018 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.883481 kubelet[2960]: E0130 18:07:56.875211 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.883481 kubelet[2960]: W0130 18:07:56.875232 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.883481 kubelet[2960]: E0130 18:07:56.875489 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.883481 kubelet[2960]: E0130 18:07:56.875696 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.883481 kubelet[2960]: W0130 18:07:56.875710 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.883481 kubelet[2960]: E0130 18:07:56.875799 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.900183 kubelet[2960]: E0130 18:07:56.877084 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.900183 kubelet[2960]: W0130 18:07:56.877099 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.900183 kubelet[2960]: E0130 18:07:56.877902 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.900183 kubelet[2960]: E0130 18:07:56.878963 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.900183 kubelet[2960]: W0130 18:07:56.878978 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.900183 kubelet[2960]: E0130 18:07:56.879553 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.900183 kubelet[2960]: E0130 18:07:56.881048 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.900183 kubelet[2960]: W0130 18:07:56.881064 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.900183 kubelet[2960]: E0130 18:07:56.881141 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.900183 kubelet[2960]: E0130 18:07:56.882784 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.900765 kubelet[2960]: W0130 18:07:56.882799 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.900765 kubelet[2960]: E0130 18:07:56.882899 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.900765 kubelet[2960]: E0130 18:07:56.883179 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.900765 kubelet[2960]: W0130 18:07:56.883196 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.900765 kubelet[2960]: E0130 18:07:56.883723 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 30 18:07:56.903114 kubelet[2960]: E0130 18:07:56.902839 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.903335 kubelet[2960]: W0130 18:07:56.903150 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.903561 kubelet[2960]: E0130 18:07:56.903447 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.919108 containerd[1633]: time="2025-01-30T18:07:56.918930354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbd4g,Uid:38063852-f177-4399-9b62-f941670f8d9b,Namespace:calico-system,Attempt:0,}"
Jan 30 18:07:56.949808 kubelet[2960]: E0130 18:07:56.949567 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:07:56.949808 kubelet[2960]: W0130 18:07:56.949604 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:07:56.949808 kubelet[2960]: E0130 18:07:56.949662 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:07:56.968070 containerd[1633]: time="2025-01-30T18:07:56.966365543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 18:07:56.968070 containerd[1633]: time="2025-01-30T18:07:56.966486815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 18:07:56.968070 containerd[1633]: time="2025-01-30T18:07:56.966531441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:56.968070 containerd[1633]: time="2025-01-30T18:07:56.966769673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:57.045967 containerd[1633]: time="2025-01-30T18:07:57.041968992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 18:07:57.045967 containerd[1633]: time="2025-01-30T18:07:57.042082461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 18:07:57.045967 containerd[1633]: time="2025-01-30T18:07:57.042102153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:57.045967 containerd[1633]: time="2025-01-30T18:07:57.042333133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 18:07:57.240943 containerd[1633]: time="2025-01-30T18:07:57.239969104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d7cd8566f-xrvhl,Uid:19382625-9aae-428e-804f-6136f46a9d91,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6a073c7dc21138f522921dd14dd34415f1675ff890bd0bd28cc28e50ad7e3b2\""
Jan 30 18:07:57.251308 containerd[1633]: time="2025-01-30T18:07:57.249668958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 18:07:57.256316 containerd[1633]: time="2025-01-30T18:07:57.256126807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbd4g,Uid:38063852-f177-4399-9b62-f941670f8d9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a5248e33361087ad19d28969c81d172be42cd79101142943217e1ee517f811a\""
Jan 30 18:07:58.637566 kubelet[2960]: E0130 18:07:58.635589 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94"
Jan 30 18:07:58.867076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463556941.mount: Deactivated successfully.
Jan 30 18:07:59.531918 systemd[1]: Started sshd@18-10.230.68.22:22-113.200.60.74:46014.service - OpenSSH per-connection server daemon (113.200.60.74:46014).
Jan 30 18:08:00.060140 containerd[1633]: time="2025-01-30T18:08:00.059975434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 18:08:00.062049 containerd[1633]: time="2025-01-30T18:08:00.061998724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 30 18:08:00.068283 containerd[1633]: time="2025-01-30T18:08:00.068219297Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 18:08:00.071607 containerd[1633]: time="2025-01-30T18:08:00.071548283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 18:08:00.072843 containerd[1633]: time="2025-01-30T18:08:00.072657460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.822812089s"
Jan 30 18:08:00.072843 containerd[1633]: time="2025-01-30T18:08:00.072709139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 30 18:08:00.077271 containerd[1633]: time="2025-01-30T18:08:00.077133896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 18:08:00.089205 containerd[1633]: time="2025-01-30T18:08:00.089083686Z" level=info msg="CreateContainer within sandbox \"a6a073c7dc21138f522921dd14dd34415f1675ff890bd0bd28cc28e50ad7e3b2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 18:08:00.107789 containerd[1633]: time="2025-01-30T18:08:00.107750895Z" level=info msg="CreateContainer within sandbox \"a6a073c7dc21138f522921dd14dd34415f1675ff890bd0bd28cc28e50ad7e3b2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bf3f58dcdaefa1a97eef64d8bcdbdb1abfeae1ac37a6746deb22716c14171fb0\""
Jan 30 18:08:00.109342 containerd[1633]: time="2025-01-30T18:08:00.108825571Z" level=info msg="StartContainer for \"bf3f58dcdaefa1a97eef64d8bcdbdb1abfeae1ac37a6746deb22716c14171fb0\""
Jan 30 18:08:00.222659 containerd[1633]: time="2025-01-30T18:08:00.222604585Z" level=info msg="StartContainer for \"bf3f58dcdaefa1a97eef64d8bcdbdb1abfeae1ac37a6746deb22716c14171fb0\" returns successfully"
Jan 30 18:08:00.638220 kubelet[2960]: E0130 18:08:00.637557 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94"
Jan 30 18:08:00.885305 kubelet[2960]: E0130 18:08:00.885246 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:08:00.885305 kubelet[2960]: W0130 18:08:00.885281 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:08:00.885822 kubelet[2960]: E0130 18:08:00.885321 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:08:00.885822 kubelet[2960]: E0130 18:08:00.885607 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:08:00.885822 kubelet[2960]: W0130 18:08:00.885622 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:08:00.885822 kubelet[2960]: E0130 18:08:00.885638 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 18:08:00.886293 kubelet[2960]: E0130 18:08:00.885931 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 18:08:00.886293 kubelet[2960]: W0130 18:08:00.885945 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 18:08:00.886293 kubelet[2960]: E0130 18:08:00.885958 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 30 18:08:00.886293 kubelet[2960]: E0130 18:08:00.886212 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.886293 kubelet[2960]: W0130 18:08:00.886225 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.886293 kubelet[2960]: E0130 18:08:00.886238 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.886839 kubelet[2960]: E0130 18:08:00.886532 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.886839 kubelet[2960]: W0130 18:08:00.886547 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.886839 kubelet[2960]: E0130 18:08:00.886572 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.886839 kubelet[2960]: E0130 18:08:00.886798 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.886839 kubelet[2960]: W0130 18:08:00.886812 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.886839 kubelet[2960]: E0130 18:08:00.886827 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.887430 kubelet[2960]: E0130 18:08:00.887137 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.887430 kubelet[2960]: W0130 18:08:00.887151 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.887430 kubelet[2960]: E0130 18:08:00.887165 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.887430 kubelet[2960]: E0130 18:08:00.887412 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.887857 kubelet[2960]: W0130 18:08:00.887439 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.887857 kubelet[2960]: E0130 18:08:00.887454 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.887857 kubelet[2960]: E0130 18:08:00.887698 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.887857 kubelet[2960]: W0130 18:08:00.887712 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.887857 kubelet[2960]: E0130 18:08:00.887729 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.888556 kubelet[2960]: E0130 18:08:00.887978 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.888556 kubelet[2960]: W0130 18:08:00.888003 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.888556 kubelet[2960]: E0130 18:08:00.888016 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.888556 kubelet[2960]: E0130 18:08:00.888301 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.888556 kubelet[2960]: W0130 18:08:00.888315 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.888556 kubelet[2960]: E0130 18:08:00.888329 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.888932 kubelet[2960]: E0130 18:08:00.888569 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.888932 kubelet[2960]: W0130 18:08:00.888586 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.888932 kubelet[2960]: E0130 18:08:00.888600 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.890045 kubelet[2960]: E0130 18:08:00.890024 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.890045 kubelet[2960]: W0130 18:08:00.890044 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.890230 kubelet[2960]: E0130 18:08:00.890061 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.890349 kubelet[2960]: E0130 18:08:00.890329 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.890349 kubelet[2960]: W0130 18:08:00.890348 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.890468 kubelet[2960]: E0130 18:08:00.890363 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.890639 kubelet[2960]: E0130 18:08:00.890619 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.890639 kubelet[2960]: W0130 18:08:00.890638 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.891019 kubelet[2960]: E0130 18:08:00.890653 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.901237 kubelet[2960]: E0130 18:08:00.901215 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.901582 kubelet[2960]: W0130 18:08:00.901360 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.901582 kubelet[2960]: E0130 18:08:00.901387 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.901802 kubelet[2960]: E0130 18:08:00.901782 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.902150 kubelet[2960]: W0130 18:08:00.902028 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.902150 kubelet[2960]: E0130 18:08:00.902063 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.902510 kubelet[2960]: E0130 18:08:00.902474 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.902510 kubelet[2960]: W0130 18:08:00.902502 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.902652 kubelet[2960]: E0130 18:08:00.902530 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.902919 kubelet[2960]: E0130 18:08:00.902896 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.902919 kubelet[2960]: W0130 18:08:00.902917 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.903009 kubelet[2960]: E0130 18:08:00.902941 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.903245 kubelet[2960]: E0130 18:08:00.903224 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.903312 kubelet[2960]: W0130 18:08:00.903244 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.903418 kubelet[2960]: E0130 18:08:00.903395 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.903713 kubelet[2960]: E0130 18:08:00.903688 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.903713 kubelet[2960]: W0130 18:08:00.903708 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.903937 kubelet[2960]: E0130 18:08:00.903827 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.904212 kubelet[2960]: E0130 18:08:00.904193 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.904276 kubelet[2960]: W0130 18:08:00.904227 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.904329 kubelet[2960]: E0130 18:08:00.904311 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.904632 kubelet[2960]: E0130 18:08:00.904612 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.904632 kubelet[2960]: W0130 18:08:00.904630 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.904861 kubelet[2960]: E0130 18:08:00.904665 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.905323 kubelet[2960]: E0130 18:08:00.905245 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.905323 kubelet[2960]: W0130 18:08:00.905277 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.905323 kubelet[2960]: E0130 18:08:00.905294 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.906317 kubelet[2960]: E0130 18:08:00.906186 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.906317 kubelet[2960]: W0130 18:08:00.906220 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.906317 kubelet[2960]: E0130 18:08:00.906260 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.907094 kubelet[2960]: E0130 18:08:00.906936 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.907094 kubelet[2960]: W0130 18:08:00.906963 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.907094 kubelet[2960]: E0130 18:08:00.906989 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.907770 kubelet[2960]: E0130 18:08:00.907607 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.907770 kubelet[2960]: W0130 18:08:00.907628 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.907770 kubelet[2960]: E0130 18:08:00.907693 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.908449 kubelet[2960]: E0130 18:08:00.908267 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.908449 kubelet[2960]: W0130 18:08:00.908293 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.908449 kubelet[2960]: E0130 18:08:00.908384 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.909163 kubelet[2960]: E0130 18:08:00.908930 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.909163 kubelet[2960]: W0130 18:08:00.908951 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.909163 kubelet[2960]: E0130 18:08:00.909093 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.910344 kubelet[2960]: E0130 18:08:00.909796 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.910344 kubelet[2960]: W0130 18:08:00.909815 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.910344 kubelet[2960]: E0130 18:08:00.909914 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.911070 kubelet[2960]: E0130 18:08:00.911025 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.911369 kubelet[2960]: W0130 18:08:00.911302 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.911691 kubelet[2960]: E0130 18:08:00.911638 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:00.913025 kubelet[2960]: E0130 18:08:00.912985 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.913292 kubelet[2960]: W0130 18:08:00.913085 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.913292 kubelet[2960]: E0130 18:08:00.913106 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:00.914510 kubelet[2960]: E0130 18:08:00.914441 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:00.914510 kubelet[2960]: W0130 18:08:00.914461 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:00.914510 kubelet[2960]: E0130 18:08:00.914477 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:01.084345 systemd[1]: run-containerd-runc-k8s.io-bf3f58dcdaefa1a97eef64d8bcdbdb1abfeae1ac37a6746deb22716c14171fb0-runc.ntg4MX.mount: Deactivated successfully. 
Jan 30 18:08:01.799957 kubelet[2960]: I0130 18:08:01.799593 2960 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:08:01.832046 containerd[1633]: time="2025-01-30T18:08:01.831980233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:01.833510 containerd[1633]: time="2025-01-30T18:08:01.833467888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 18:08:01.834221 containerd[1633]: time="2025-01-30T18:08:01.833941880Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:01.836591 containerd[1633]: time="2025-01-30T18:08:01.836521132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:01.837791 containerd[1633]: time="2025-01-30T18:08:01.837724712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.760543068s" Jan 30 18:08:01.838061 containerd[1633]: time="2025-01-30T18:08:01.837945546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 18:08:01.841756 containerd[1633]: time="2025-01-30T18:08:01.840471815Z" level=info msg="CreateContainer within sandbox 
\"2a5248e33361087ad19d28969c81d172be42cd79101142943217e1ee517f811a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 18:08:01.862099 containerd[1633]: time="2025-01-30T18:08:01.862048176Z" level=info msg="CreateContainer within sandbox \"2a5248e33361087ad19d28969c81d172be42cd79101142943217e1ee517f811a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"12c0eefbb357f7a75104afefaad2aabd170dc95af7ab41ab9a1f2dd165c97503\"" Jan 30 18:08:01.862897 containerd[1633]: time="2025-01-30T18:08:01.862710788Z" level=info msg="StartContainer for \"12c0eefbb357f7a75104afefaad2aabd170dc95af7ab41ab9a1f2dd165c97503\"" Jan 30 18:08:01.896745 kubelet[2960]: E0130 18:08:01.896709 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.897148 kubelet[2960]: W0130 18:08:01.896938 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.897148 kubelet[2960]: E0130 18:08:01.896974 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:01.898058 kubelet[2960]: E0130 18:08:01.897828 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.898058 kubelet[2960]: W0130 18:08:01.897846 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.898058 kubelet[2960]: E0130 18:08:01.897904 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:01.898530 kubelet[2960]: E0130 18:08:01.898420 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.898530 kubelet[2960]: W0130 18:08:01.898454 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.898530 kubelet[2960]: E0130 18:08:01.898470 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:01.899185 kubelet[2960]: E0130 18:08:01.899011 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.899185 kubelet[2960]: W0130 18:08:01.899029 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.899185 kubelet[2960]: E0130 18:08:01.899045 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:01.899568 kubelet[2960]: E0130 18:08:01.899500 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.899568 kubelet[2960]: W0130 18:08:01.899518 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.899781 kubelet[2960]: E0130 18:08:01.899534 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:01.900255 kubelet[2960]: E0130 18:08:01.900114 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.900255 kubelet[2960]: W0130 18:08:01.900134 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.900255 kubelet[2960]: E0130 18:08:01.900149 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:01.900603 kubelet[2960]: E0130 18:08:01.900505 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.900603 kubelet[2960]: W0130 18:08:01.900547 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.900917 kubelet[2960]: E0130 18:08:01.900566 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 18:08:01.901150 kubelet[2960]: E0130 18:08:01.901133 2960 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 18:08:01.901296 kubelet[2960]: W0130 18:08:01.901227 2960 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 18:08:01.901450 kubelet[2960]: E0130 18:08:01.901276 2960 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 18:08:01.981175 containerd[1633]: time="2025-01-30T18:08:01.981126581Z" level=info msg="StartContainer for \"12c0eefbb357f7a75104afefaad2aabd170dc95af7ab41ab9a1f2dd165c97503\" returns successfully" Jan 30 18:08:02.084752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12c0eefbb357f7a75104afefaad2aabd170dc95af7ab41ab9a1f2dd165c97503-rootfs.mount: Deactivated successfully. 
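The repeated kubelet errors above all stem from one FlexVolume probe failure: the kubelet executes the driver binary (here the missing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the argument `init` and unmarshals its stdout as JSON; since the executable is not found, the output is empty, and Go's encoding/json reports "unexpected end of JSON input". A minimal Python sketch of that parse step (the kubelet itself is Go; the helper name and the example reply are illustrative, not kubelet code):

```python
import json

def parse_driver_output(output: str) -> dict:
    # The kubelet runs the FlexVolume driver with "init" and parses its
    # stdout as JSON; a missing executable produces empty output, which
    # is not valid JSON.
    return json.loads(output)

# A present, healthy driver would answer `init` with something like:
healthy = '{"status": "Success", "capabilities": {"attach": false}}'
assert parse_driver_output(healthy)["status"] == "Success"

# Empty output reproduces the failure mode seen in the log
# ("unexpected end of JSON input" in Go; JSONDecodeError in Python):
try:
    parse_driver_output("")
    raised = False
except json.JSONDecodeError:
    raised = True
assert raised
```

Installing a driver that emits a well-formed status object (or removing the stale plugin directory) stops the probe spam.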
Jan 30 18:08:02.183438 containerd[1633]: time="2025-01-30T18:08:02.152685137Z" level=info msg="shim disconnected" id=12c0eefbb357f7a75104afefaad2aabd170dc95af7ab41ab9a1f2dd165c97503 namespace=k8s.io Jan 30 18:08:02.183438 containerd[1633]: time="2025-01-30T18:08:02.183243755Z" level=warning msg="cleaning up after shim disconnected" id=12c0eefbb357f7a75104afefaad2aabd170dc95af7ab41ab9a1f2dd165c97503 namespace=k8s.io Jan 30 18:08:02.183438 containerd[1633]: time="2025-01-30T18:08:02.183279959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 18:08:02.636522 kubelet[2960]: E0130 18:08:02.635935 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94" Jan 30 18:08:02.807890 containerd[1633]: time="2025-01-30T18:08:02.807050459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 18:08:02.837315 kubelet[2960]: I0130 18:08:02.837225 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d7cd8566f-xrvhl" podStartSLOduration=4.010181566 podStartE2EDuration="6.837169777s" podCreationTimestamp="2025-01-30 18:07:56 +0000 UTC" firstStartedPulling="2025-01-30 18:07:57.24723111 +0000 UTC m=+22.813836390" lastFinishedPulling="2025-01-30 18:08:00.074219336 +0000 UTC m=+25.640824601" observedRunningTime="2025-01-30 18:08:00.82114384 +0000 UTC m=+26.387749126" watchObservedRunningTime="2025-01-30 18:08:02.837169777 +0000 UTC m=+28.403775062" Jan 30 18:08:04.636618 kubelet[2960]: E0130 18:08:04.636323 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94" Jan 30 18:08:06.637906 kubelet[2960]: E0130 18:08:06.636118 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94" Jan 30 18:08:08.635543 kubelet[2960]: E0130 18:08:08.635470 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94" Jan 30 18:08:08.898344 containerd[1633]: time="2025-01-30T18:08:08.898143804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:08.899611 containerd[1633]: time="2025-01-30T18:08:08.899345360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 18:08:08.900270 containerd[1633]: time="2025-01-30T18:08:08.900229528Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:08.903288 containerd[1633]: time="2025-01-30T18:08:08.903251253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:08.904881 containerd[1633]: time="2025-01-30T18:08:08.904649362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.097535224s" Jan 30 18:08:08.904881 containerd[1633]: time="2025-01-30T18:08:08.904707267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 18:08:08.909908 containerd[1633]: time="2025-01-30T18:08:08.909770996Z" level=info msg="CreateContainer within sandbox \"2a5248e33361087ad19d28969c81d172be42cd79101142943217e1ee517f811a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 18:08:08.927813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478012984.mount: Deactivated successfully. Jan 30 18:08:08.938560 containerd[1633]: time="2025-01-30T18:08:08.938489708Z" level=info msg="CreateContainer within sandbox \"2a5248e33361087ad19d28969c81d172be42cd79101142943217e1ee517f811a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04ac659f05d57f6c54b193717c87491304e5d8149b45bb5b97bbf5b140869221\"" Jan 30 18:08:08.939693 containerd[1633]: time="2025-01-30T18:08:08.939645890Z" level=info msg="StartContainer for \"04ac659f05d57f6c54b193717c87491304e5d8149b45bb5b97bbf5b140869221\"" Jan 30 18:08:09.064053 containerd[1633]: time="2025-01-30T18:08:09.063947231Z" level=info msg="StartContainer for \"04ac659f05d57f6c54b193717c87491304e5d8149b45bb5b97bbf5b140869221\" returns successfully" Jan 30 18:08:09.940357 systemd-journald[1172]: Under memory pressure, flushing caches. Jan 30 18:08:09.922356 systemd-resolved[1511]: Under memory pressure, flushing caches. Jan 30 18:08:09.922460 systemd-resolved[1511]: Flushed all caches. 
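The containerd entries above are logfmt-style records (`time="…" level=… msg="…"` plus trailing key=value fields). A small sketch for pulling out the level and message when grepping such a capture; the regex is inferred from the lines above, not from a containerd format specification:

```python
import re

# Field layout inferred from the containerd entries in this log
# (time="..." level=... msg="..."); illustrative only.
RECORD = re.compile(r'time="(?P<time>[^"]+)" level=(?P<level>\w+) msg="(?P<msg>.*)"')

line = ('time="2025-01-30T18:08:02.183279959Z" level=info '
        'msg="cleaning up dead shim" namespace=k8s.io')
m = RECORD.search(line)
assert m is not None
assert m.group("level") == "info"
assert m.group("msg") == "cleaning up dead shim"
```

The greedy `.*` inside `msg` captures up to the last double quote on the line, which handles messages containing escaped quotes (as in the StartContainer records above).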
Jan 30 18:08:10.187741 kubelet[2960]: I0130 18:08:10.184305 2960 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 18:08:10.244213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04ac659f05d57f6c54b193717c87491304e5d8149b45bb5b97bbf5b140869221-rootfs.mount: Deactivated successfully. Jan 30 18:08:10.251716 kubelet[2960]: I0130 18:08:10.251101 2960 topology_manager.go:215] "Topology Admit Handler" podUID="27fc1c1a-8707-42c1-a228-145009810bf8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cs5tc" Jan 30 18:08:10.253040 kubelet[2960]: I0130 18:08:10.252457 2960 topology_manager.go:215] "Topology Admit Handler" podUID="8b9ee45a-38b3-4449-976f-eb0f425ce59f" podNamespace="calico-apiserver" podName="calico-apiserver-86c44fb797-kj4m5" Jan 30 18:08:10.262457 kubelet[2960]: I0130 18:08:10.261439 2960 topology_manager.go:215] "Topology Admit Handler" podUID="ba41a6ef-6128-4cea-831d-b51553e05c5c" podNamespace="calico-apiserver" podName="calico-apiserver-86c44fb797-bqbzc" Jan 30 18:08:10.262744 containerd[1633]: time="2025-01-30T18:08:10.262583482Z" level=info msg="shim disconnected" id=04ac659f05d57f6c54b193717c87491304e5d8149b45bb5b97bbf5b140869221 namespace=k8s.io Jan 30 18:08:10.262744 containerd[1633]: time="2025-01-30T18:08:10.262698912Z" level=warning msg="cleaning up after shim disconnected" id=04ac659f05d57f6c54b193717c87491304e5d8149b45bb5b97bbf5b140869221 namespace=k8s.io Jan 30 18:08:10.265558 containerd[1633]: time="2025-01-30T18:08:10.262716010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 18:08:10.267745 kubelet[2960]: I0130 18:08:10.267676 2960 topology_manager.go:215] "Topology Admit Handler" podUID="89cbff21-1d7a-4590-8dec-898c81289a13" podNamespace="calico-system" podName="calico-kube-controllers-6748df8946-p6bqf" Jan 30 18:08:10.272680 kubelet[2960]: I0130 18:08:10.272579 2960 topology_manager.go:215] "Topology Admit Handler" podUID="e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8" 
podNamespace="kube-system" podName="coredns-7db6d8ff4d-2hwgm" Jan 30 18:08:10.313258 kubelet[2960]: I0130 18:08:10.313113 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mts8m\" (UniqueName: \"kubernetes.io/projected/89cbff21-1d7a-4590-8dec-898c81289a13-kube-api-access-mts8m\") pod \"calico-kube-controllers-6748df8946-p6bqf\" (UID: \"89cbff21-1d7a-4590-8dec-898c81289a13\") " pod="calico-system/calico-kube-controllers-6748df8946-p6bqf" Jan 30 18:08:10.313258 kubelet[2960]: I0130 18:08:10.313236 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfq9z\" (UniqueName: \"kubernetes.io/projected/27fc1c1a-8707-42c1-a228-145009810bf8-kube-api-access-lfq9z\") pod \"coredns-7db6d8ff4d-cs5tc\" (UID: \"27fc1c1a-8707-42c1-a228-145009810bf8\") " pod="kube-system/coredns-7db6d8ff4d-cs5tc" Jan 30 18:08:10.313547 kubelet[2960]: I0130 18:08:10.313280 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4x7\" (UniqueName: \"kubernetes.io/projected/8b9ee45a-38b3-4449-976f-eb0f425ce59f-kube-api-access-2h4x7\") pod \"calico-apiserver-86c44fb797-kj4m5\" (UID: \"8b9ee45a-38b3-4449-976f-eb0f425ce59f\") " pod="calico-apiserver/calico-apiserver-86c44fb797-kj4m5" Jan 30 18:08:10.313547 kubelet[2960]: I0130 18:08:10.313322 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ba41a6ef-6128-4cea-831d-b51553e05c5c-calico-apiserver-certs\") pod \"calico-apiserver-86c44fb797-bqbzc\" (UID: \"ba41a6ef-6128-4cea-831d-b51553e05c5c\") " pod="calico-apiserver/calico-apiserver-86c44fb797-bqbzc" Jan 30 18:08:10.313547 kubelet[2960]: I0130 18:08:10.313361 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbbmm\" (UniqueName: 
\"kubernetes.io/projected/ba41a6ef-6128-4cea-831d-b51553e05c5c-kube-api-access-qbbmm\") pod \"calico-apiserver-86c44fb797-bqbzc\" (UID: \"ba41a6ef-6128-4cea-831d-b51553e05c5c\") " pod="calico-apiserver/calico-apiserver-86c44fb797-bqbzc" Jan 30 18:08:10.313547 kubelet[2960]: I0130 18:08:10.313392 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b9ee45a-38b3-4449-976f-eb0f425ce59f-calico-apiserver-certs\") pod \"calico-apiserver-86c44fb797-kj4m5\" (UID: \"8b9ee45a-38b3-4449-976f-eb0f425ce59f\") " pod="calico-apiserver/calico-apiserver-86c44fb797-kj4m5" Jan 30 18:08:10.313547 kubelet[2960]: I0130 18:08:10.313424 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x77p\" (UniqueName: \"kubernetes.io/projected/e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8-kube-api-access-4x77p\") pod \"coredns-7db6d8ff4d-2hwgm\" (UID: \"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8\") " pod="kube-system/coredns-7db6d8ff4d-2hwgm" Jan 30 18:08:10.314966 kubelet[2960]: I0130 18:08:10.313461 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27fc1c1a-8707-42c1-a228-145009810bf8-config-volume\") pod \"coredns-7db6d8ff4d-cs5tc\" (UID: \"27fc1c1a-8707-42c1-a228-145009810bf8\") " pod="kube-system/coredns-7db6d8ff4d-cs5tc" Jan 30 18:08:10.314966 kubelet[2960]: I0130 18:08:10.313490 2960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89cbff21-1d7a-4590-8dec-898c81289a13-tigera-ca-bundle\") pod \"calico-kube-controllers-6748df8946-p6bqf\" (UID: \"89cbff21-1d7a-4590-8dec-898c81289a13\") " pod="calico-system/calico-kube-controllers-6748df8946-p6bqf" Jan 30 18:08:10.314966 kubelet[2960]: I0130 18:08:10.313517 2960 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8-config-volume\") pod \"coredns-7db6d8ff4d-2hwgm\" (UID: \"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8\") " pod="kube-system/coredns-7db6d8ff4d-2hwgm" Jan 30 18:08:10.624992 containerd[1633]: time="2025-01-30T18:08:10.624246756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hwgm,Uid:e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8,Namespace:kube-system,Attempt:0,}" Jan 30 18:08:10.624992 containerd[1633]: time="2025-01-30T18:08:10.624518382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-kj4m5,Uid:8b9ee45a-38b3-4449-976f-eb0f425ce59f,Namespace:calico-apiserver,Attempt:0,}" Jan 30 18:08:10.624992 containerd[1633]: time="2025-01-30T18:08:10.624963350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cs5tc,Uid:27fc1c1a-8707-42c1-a228-145009810bf8,Namespace:kube-system,Attempt:0,}" Jan 30 18:08:10.632324 containerd[1633]: time="2025-01-30T18:08:10.632256682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-bqbzc,Uid:ba41a6ef-6128-4cea-831d-b51553e05c5c,Namespace:calico-apiserver,Attempt:0,}" Jan 30 18:08:10.633756 containerd[1633]: time="2025-01-30T18:08:10.633646756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6748df8946-p6bqf,Uid:89cbff21-1d7a-4590-8dec-898c81289a13,Namespace:calico-system,Attempt:0,}" Jan 30 18:08:10.655043 containerd[1633]: time="2025-01-30T18:08:10.654944493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdn4g,Uid:ef41d1d4-bd1d-45f6-8486-f723f43e3c94,Namespace:calico-system,Attempt:0,}" Jan 30 18:08:10.848207 containerd[1633]: time="2025-01-30T18:08:10.848147128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 18:08:11.051959 kubelet[2960]: 
I0130 18:08:11.050078 2960 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:08:11.145930 containerd[1633]: time="2025-01-30T18:08:11.145814594Z" level=error msg="Failed to destroy network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.147061 containerd[1633]: time="2025-01-30T18:08:11.147023780Z" level=error msg="Failed to destroy network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.147397 containerd[1633]: time="2025-01-30T18:08:11.147358130Z" level=error msg="Failed to destroy network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.151802 containerd[1633]: time="2025-01-30T18:08:11.151762027Z" level=error msg="encountered an error cleaning up failed sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.152335 containerd[1633]: time="2025-01-30T18:08:11.152028697Z" level=error msg="encountered an error cleaning up failed sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.158754 containerd[1633]: time="2025-01-30T18:08:11.158425733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-bqbzc,Uid:ba41a6ef-6128-4cea-831d-b51553e05c5c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.159898 containerd[1633]: time="2025-01-30T18:08:11.159332320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-kj4m5,Uid:8b9ee45a-38b3-4449-976f-eb0f425ce59f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.163977 containerd[1633]: time="2025-01-30T18:08:11.163933934Z" level=error msg="Failed to destroy network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.165520 containerd[1633]: time="2025-01-30T18:08:11.164546264Z" level=error msg="encountered an error cleaning up failed sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.165520 containerd[1633]: time="2025-01-30T18:08:11.164599610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cs5tc,Uid:27fc1c1a-8707-42c1-a228-145009810bf8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.165520 containerd[1633]: time="2025-01-30T18:08:11.164893297Z" level=error msg="encountered an error cleaning up failed sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.165520 containerd[1633]: time="2025-01-30T18:08:11.164939064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hwgm,Uid:e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.165520 containerd[1633]: time="2025-01-30T18:08:11.165098057Z" level=error msg="Failed to destroy network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.165520 containerd[1633]: time="2025-01-30T18:08:11.165220649Z" level=error msg="Failed to destroy network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.166292 containerd[1633]: time="2025-01-30T18:08:11.166257454Z" level=error msg="encountered an error cleaning up failed sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.166496 containerd[1633]: time="2025-01-30T18:08:11.166442566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6748df8946-p6bqf,Uid:89cbff21-1d7a-4590-8dec-898c81289a13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.166786 containerd[1633]: time="2025-01-30T18:08:11.166548526Z" level=error msg="encountered an error cleaning up failed sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.167021 containerd[1633]: time="2025-01-30T18:08:11.166984208Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdn4g,Uid:ef41d1d4-bd1d-45f6-8486-f723f43e3c94,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.172145 kubelet[2960]: E0130 18:08:11.165405 2960 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.172315 kubelet[2960]: E0130 18:08:11.172253 2960 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.172382 kubelet[2960]: E0130 18:08:11.172333 2960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdn4g" Jan 30 18:08:11.172443 kubelet[2960]: E0130 18:08:11.172381 2960 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdn4g" Jan 30 18:08:11.172511 kubelet[2960]: E0130 18:08:11.172454 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tdn4g_calico-system(ef41d1d4-bd1d-45f6-8486-f723f43e3c94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tdn4g_calico-system(ef41d1d4-bd1d-45f6-8486-f723f43e3c94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94" Jan 30 18:08:11.173494 kubelet[2960]: E0130 18:08:11.172624 2960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hwgm" Jan 30 18:08:11.173494 kubelet[2960]: E0130 18:08:11.172662 2960 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hwgm" Jan 30 18:08:11.173494 kubelet[2960]: E0130 18:08:11.172793 2960 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.173684 kubelet[2960]: E0130 18:08:11.172706 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2hwgm_kube-system(e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2hwgm_kube-system(e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2hwgm" podUID="e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8" Jan 30 18:08:11.173684 kubelet[2960]: E0130 18:08:11.173032 2960 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.173684 kubelet[2960]: E0130 18:08:11.173075 2960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c44fb797-kj4m5" Jan 30 18:08:11.173888 kubelet[2960]: E0130 18:08:11.173100 2960 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c44fb797-kj4m5" Jan 30 18:08:11.173888 kubelet[2960]: E0130 18:08:11.173169 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86c44fb797-kj4m5_calico-apiserver(8b9ee45a-38b3-4449-976f-eb0f425ce59f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86c44fb797-kj4m5_calico-apiserver(8b9ee45a-38b3-4449-976f-eb0f425ce59f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c44fb797-kj4m5" podUID="8b9ee45a-38b3-4449-976f-eb0f425ce59f" Jan 30 18:08:11.173888 kubelet[2960]: E0130 18:08:11.173228 2960 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.174445 kubelet[2960]: E0130 18:08:11.173259 2960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cs5tc" Jan 30 18:08:11.174445 kubelet[2960]: E0130 18:08:11.173280 2960 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cs5tc" Jan 30 18:08:11.174445 kubelet[2960]: E0130 18:08:11.173319 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cs5tc_kube-system(27fc1c1a-8707-42c1-a228-145009810bf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cs5tc_kube-system(27fc1c1a-8707-42c1-a228-145009810bf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cs5tc" podUID="27fc1c1a-8707-42c1-a228-145009810bf8" Jan 30 18:08:11.174611 kubelet[2960]: E0130 18:08:11.165406 2960 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:11.174611 kubelet[2960]: E0130 18:08:11.173369 2960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c44fb797-bqbzc" Jan 30 18:08:11.174611 kubelet[2960]: E0130 18:08:11.173390 2960 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c44fb797-bqbzc" Jan 30 18:08:11.174737 kubelet[2960]: E0130 18:08:11.173425 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86c44fb797-bqbzc_calico-apiserver(ba41a6ef-6128-4cea-831d-b51553e05c5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86c44fb797-bqbzc_calico-apiserver(ba41a6ef-6128-4cea-831d-b51553e05c5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c44fb797-bqbzc" podUID="ba41a6ef-6128-4cea-831d-b51553e05c5c" Jan 30 18:08:11.174737 kubelet[2960]: E0130 18:08:11.172877 2960 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6748df8946-p6bqf" Jan 30 18:08:11.174737 kubelet[2960]: E0130 18:08:11.174094 2960 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6748df8946-p6bqf" Jan 30 18:08:11.174933 kubelet[2960]: E0130 18:08:11.174162 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6748df8946-p6bqf_calico-system(89cbff21-1d7a-4590-8dec-898c81289a13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6748df8946-p6bqf_calico-system(89cbff21-1d7a-4590-8dec-898c81289a13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6748df8946-p6bqf" podUID="89cbff21-1d7a-4590-8dec-898c81289a13" Jan 30 18:08:11.838952 kubelet[2960]: I0130 18:08:11.838706 2960 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:11.841542 kubelet[2960]: I0130 18:08:11.841016 2960 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:11.852885 kubelet[2960]: I0130 18:08:11.852810 2960 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:11.856036 kubelet[2960]: I0130 18:08:11.855931 2960 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:11.858999 kubelet[2960]: I0130 18:08:11.858951 2960 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:11.861861 kubelet[2960]: I0130 18:08:11.861306 2960 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:11.905096 containerd[1633]: time="2025-01-30T18:08:11.904283278Z" level=info msg="StopPodSandbox for \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\"" Jan 30 18:08:11.906938 containerd[1633]: time="2025-01-30T18:08:11.905208842Z" level=info msg="StopPodSandbox for \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\"" Jan 30 18:08:11.907780 containerd[1633]: time="2025-01-30T18:08:11.907053548Z" level=info msg="Ensure that sandbox 5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7 in task-service has been cleanup 
successfully" Jan 30 18:08:11.908695 containerd[1633]: time="2025-01-30T18:08:11.908452887Z" level=info msg="StopPodSandbox for \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\"" Jan 30 18:08:11.909297 containerd[1633]: time="2025-01-30T18:08:11.909226859Z" level=info msg="Ensure that sandbox 2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4 in task-service has been cleanup successfully" Jan 30 18:08:11.910292 containerd[1633]: time="2025-01-30T18:08:11.909813660Z" level=info msg="StopPodSandbox for \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\"" Jan 30 18:08:11.910292 containerd[1633]: time="2025-01-30T18:08:11.910041483Z" level=info msg="Ensure that sandbox ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd in task-service has been cleanup successfully" Jan 30 18:08:11.910404 containerd[1633]: time="2025-01-30T18:08:11.910359098Z" level=info msg="StopPodSandbox for \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\"" Jan 30 18:08:11.910970 containerd[1633]: time="2025-01-30T18:08:11.910939636Z" level=info msg="StopPodSandbox for \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\"" Jan 30 18:08:11.911222 containerd[1633]: time="2025-01-30T18:08:11.911191834Z" level=info msg="Ensure that sandbox 0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f in task-service has been cleanup successfully" Jan 30 18:08:11.911572 containerd[1633]: time="2025-01-30T18:08:11.911529433Z" level=info msg="Ensure that sandbox 265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb in task-service has been cleanup successfully" Jan 30 18:08:11.914775 containerd[1633]: time="2025-01-30T18:08:11.911810347Z" level=info msg="Ensure that sandbox fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99 in task-service has been cleanup successfully" Jan 30 18:08:12.037955 containerd[1633]: time="2025-01-30T18:08:12.036583986Z" level=error msg="StopPodSandbox for 
\"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\" failed" error="failed to destroy network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:12.038163 kubelet[2960]: E0130 18:08:12.036962 2960 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:12.038163 kubelet[2960]: E0130 18:08:12.037053 2960 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd"} Jan 30 18:08:12.038163 kubelet[2960]: E0130 18:08:12.037162 2960 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef41d1d4-bd1d-45f6-8486-f723f43e3c94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:08:12.038163 kubelet[2960]: E0130 18:08:12.037199 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef41d1d4-bd1d-45f6-8486-f723f43e3c94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tdn4g" podUID="ef41d1d4-bd1d-45f6-8486-f723f43e3c94" Jan 30 18:08:12.041705 containerd[1633]: time="2025-01-30T18:08:12.041637629Z" level=error msg="StopPodSandbox for \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\" failed" error="failed to destroy network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:12.042101 kubelet[2960]: E0130 18:08:12.042050 2960 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:12.042213 kubelet[2960]: E0130 18:08:12.042117 2960 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7"} Jan 30 18:08:12.042213 kubelet[2960]: E0130 18:08:12.042172 2960 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:08:12.042381 kubelet[2960]: E0130 18:08:12.042205 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2hwgm" podUID="e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8" Jan 30 18:08:12.061949 containerd[1633]: time="2025-01-30T18:08:12.059979269Z" level=error msg="StopPodSandbox for \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\" failed" error="failed to destroy network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:12.062216 kubelet[2960]: E0130 18:08:12.060309 2960 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:12.062216 kubelet[2960]: E0130 18:08:12.060376 2960 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f"} Jan 30 18:08:12.062216 kubelet[2960]: E0130 18:08:12.060430 2960 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b9ee45a-38b3-4449-976f-eb0f425ce59f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:08:12.062216 kubelet[2960]: E0130 18:08:12.060469 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b9ee45a-38b3-4449-976f-eb0f425ce59f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c44fb797-kj4m5" podUID="8b9ee45a-38b3-4449-976f-eb0f425ce59f" Jan 30 18:08:12.065985 containerd[1633]: time="2025-01-30T18:08:12.064906685Z" level=error msg="StopPodSandbox for \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\" failed" error="failed to destroy network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:12.066121 kubelet[2960]: E0130 18:08:12.065969 2960 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:12.066121 kubelet[2960]: E0130 18:08:12.066048 2960 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb"} Jan 30 18:08:12.066121 kubelet[2960]: E0130 18:08:12.066101 2960 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba41a6ef-6128-4cea-831d-b51553e05c5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:08:12.066345 kubelet[2960]: E0130 18:08:12.066134 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba41a6ef-6128-4cea-831d-b51553e05c5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c44fb797-bqbzc" podUID="ba41a6ef-6128-4cea-831d-b51553e05c5c" Jan 30 18:08:12.068241 containerd[1633]: time="2025-01-30T18:08:12.068190007Z" level=error msg="StopPodSandbox for \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\" failed" 
error="failed to destroy network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:12.069123 kubelet[2960]: E0130 18:08:12.069014 2960 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:12.069123 kubelet[2960]: E0130 18:08:12.069084 2960 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99"} Jan 30 18:08:12.069241 kubelet[2960]: E0130 18:08:12.069125 2960 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89cbff21-1d7a-4590-8dec-898c81289a13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:08:12.069241 kubelet[2960]: E0130 18:08:12.069168 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89cbff21-1d7a-4590-8dec-898c81289a13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6748df8946-p6bqf" podUID="89cbff21-1d7a-4590-8dec-898c81289a13" Jan 30 18:08:12.073334 containerd[1633]: time="2025-01-30T18:08:12.071071737Z" level=error msg="StopPodSandbox for \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\" failed" error="failed to destroy network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 18:08:12.073480 kubelet[2960]: E0130 18:08:12.073437 2960 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:12.073559 kubelet[2960]: E0130 18:08:12.073489 2960 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4"} Jan 30 18:08:12.073559 kubelet[2960]: E0130 18:08:12.073527 2960 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27fc1c1a-8707-42c1-a228-145009810bf8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 18:08:12.073693 kubelet[2960]: E0130 18:08:12.073558 2960 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27fc1c1a-8707-42c1-a228-145009810bf8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cs5tc" podUID="27fc1c1a-8707-42c1-a228-145009810bf8" Jan 30 18:08:20.263397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428288306.mount: Deactivated successfully. Jan 30 18:08:20.381317 containerd[1633]: time="2025-01-30T18:08:20.362614639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 18:08:20.381317 containerd[1633]: time="2025-01-30T18:08:20.381206111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:20.422300 containerd[1633]: time="2025-01-30T18:08:20.422183277Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:20.438249 containerd[1633]: time="2025-01-30T18:08:20.438008050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:20.441413 containerd[1633]: time="2025-01-30T18:08:20.441373259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.588878339s" Jan 30 18:08:20.441539 containerd[1633]: time="2025-01-30T18:08:20.441513253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 18:08:20.467648 containerd[1633]: time="2025-01-30T18:08:20.467417633Z" level=info msg="CreateContainer within sandbox \"2a5248e33361087ad19d28969c81d172be42cd79101142943217e1ee517f811a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 18:08:20.538945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861895376.mount: Deactivated successfully. Jan 30 18:08:20.570679 containerd[1633]: time="2025-01-30T18:08:20.570607655Z" level=info msg="CreateContainer within sandbox \"2a5248e33361087ad19d28969c81d172be42cd79101142943217e1ee517f811a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"af1381c7981811448d96976b39b43039203344c808c9a1c28f785eb9cb8d5c78\"" Jan 30 18:08:20.572743 containerd[1633]: time="2025-01-30T18:08:20.572638443Z" level=info msg="StartContainer for \"af1381c7981811448d96976b39b43039203344c808c9a1c28f785eb9cb8d5c78\"" Jan 30 18:08:20.835567 containerd[1633]: time="2025-01-30T18:08:20.831472965Z" level=info msg="StartContainer for \"af1381c7981811448d96976b39b43039203344c808c9a1c28f785eb9cb8d5c78\" returns successfully" Jan 30 18:08:20.990820 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 18:08:20.991655 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 18:08:21.012284 kubelet[2960]: I0130 18:08:20.988541 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kbd4g" podStartSLOduration=1.777183741 podStartE2EDuration="24.961510465s" podCreationTimestamp="2025-01-30 18:07:56 +0000 UTC" firstStartedPulling="2025-01-30 18:07:57.258454746 +0000 UTC m=+22.825060011" lastFinishedPulling="2025-01-30 18:08:20.442781463 +0000 UTC m=+46.009386735" observedRunningTime="2025-01-30 18:08:20.958555991 +0000 UTC m=+46.525161284" watchObservedRunningTime="2025-01-30 18:08:20.961510465 +0000 UTC m=+46.528115744" Jan 30 18:08:21.890286 systemd-resolved[1511]: Under memory pressure, flushing caches. Jan 30 18:08:21.898176 systemd-journald[1172]: Under memory pressure, flushing caches. Jan 30 18:08:21.890396 systemd-resolved[1511]: Flushed all caches. Jan 30 18:08:21.958125 systemd[1]: run-containerd-runc-k8s.io-af1381c7981811448d96976b39b43039203344c808c9a1c28f785eb9cb8d5c78-runc.ShCIdf.mount: Deactivated successfully. Jan 30 18:08:23.055621 kernel: bpftool[4268]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 18:08:23.406898 systemd-networkd[1256]: vxlan.calico: Link UP Jan 30 18:08:23.406913 systemd-networkd[1256]: vxlan.calico: Gained carrier Jan 30 18:08:23.639886 containerd[1633]: time="2025-01-30T18:08:23.639106742Z" level=info msg="StopPodSandbox for \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\"" Jan 30 18:08:23.950895 systemd-journald[1172]: Under memory pressure, flushing caches. Jan 30 18:08:23.938307 systemd-resolved[1511]: Under memory pressure, flushing caches. Jan 30 18:08:23.947453 systemd-resolved[1511]: Flushed all caches. 
Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.759 [INFO][4326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.759 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" iface="eth0" netns="/var/run/netns/cni-78f253c9-6e48-cde5-ed64-f0f388481e18" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.760 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" iface="eth0" netns="/var/run/netns/cni-78f253c9-6e48-cde5-ed64-f0f388481e18" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.764 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" iface="eth0" netns="/var/run/netns/cni-78f253c9-6e48-cde5-ed64-f0f388481e18" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.764 [INFO][4326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.764 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.969 [INFO][4340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.973 
[INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.973 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.991 [WARNING][4340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.991 [INFO][4340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.993 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:23.999499 containerd[1633]: 2025-01-30 18:08:23.996 [INFO][4326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:24.006523 systemd[1]: run-netns-cni\x2d78f253c9\x2d6e48\x2dcde5\x2ded64\x2df0f388481e18.mount: Deactivated successfully. 
Jan 30 18:08:24.015658 containerd[1633]: time="2025-01-30T18:08:24.015544304Z" level=info msg="TearDown network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\" successfully" Jan 30 18:08:24.015658 containerd[1633]: time="2025-01-30T18:08:24.015651808Z" level=info msg="StopPodSandbox for \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\" returns successfully" Jan 30 18:08:24.020968 containerd[1633]: time="2025-01-30T18:08:24.020758543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hwgm,Uid:e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8,Namespace:kube-system,Attempt:1,}" Jan 30 18:08:24.216833 systemd-networkd[1256]: cali16b64e874ba: Link UP Jan 30 18:08:24.217992 systemd-networkd[1256]: cali16b64e874ba: Gained carrier Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.107 [INFO][4374] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0 coredns-7db6d8ff4d- kube-system e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8 796 0 2025-01-30 18:07:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-xoz4v.gb1.brightbox.com coredns-7db6d8ff4d-2hwgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali16b64e874ba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.107 [INFO][4374] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.151 [INFO][4386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" HandleID="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.169 [INFO][4386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" HandleID="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003187c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-xoz4v.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-2hwgm", "timestamp":"2025-01-30 18:08:24.151615059 +0000 UTC"}, Hostname:"srv-xoz4v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.169 [INFO][4386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.169 [INFO][4386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.170 [INFO][4386] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xoz4v.gb1.brightbox.com' Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.172 [INFO][4386] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.181 [INFO][4386] ipam/ipam.go 372: Looking up existing affinities for host host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.187 [INFO][4386] ipam/ipam.go 489: Trying affinity for 192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.189 [INFO][4386] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.192 [INFO][4386] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.192 [INFO][4386] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.194 [INFO][4386] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7 Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.200 [INFO][4386] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.206 [INFO][4386] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.193/26] block=192.168.37.192/26 handle="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.206 [INFO][4386] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.193/26] handle="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.206 [INFO][4386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:24.264784 containerd[1633]: 2025-01-30 18:08:24.206 [INFO][4386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.193/26] IPv6=[] ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" HandleID="k8s-pod-network.c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:24.271052 containerd[1633]: 2025-01-30 18:08:24.211 [INFO][4374] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-2hwgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16b64e874ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:24.271052 containerd[1633]: 2025-01-30 18:08:24.212 [INFO][4374] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.193/32] ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:24.271052 containerd[1633]: 2025-01-30 18:08:24.212 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16b64e874ba ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:24.271052 containerd[1633]: 2025-01-30 18:08:24.219 [INFO][4374] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:24.271052 containerd[1633]: 2025-01-30 18:08:24.219 [INFO][4374] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7", Pod:"coredns-7db6d8ff4d-2hwgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16b64e874ba", 
MAC:"66:3c:3b:78:7f:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:24.271052 containerd[1633]: 2025-01-30 18:08:24.261 [INFO][4374] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hwgm" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:24.397971 containerd[1633]: time="2025-01-30T18:08:24.396126119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:08:24.397971 containerd[1633]: time="2025-01-30T18:08:24.396226302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:08:24.397971 containerd[1633]: time="2025-01-30T18:08:24.396275859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:24.402720 containerd[1633]: time="2025-01-30T18:08:24.402214539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:24.573251 containerd[1633]: time="2025-01-30T18:08:24.573196264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hwgm,Uid:e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8,Namespace:kube-system,Attempt:1,} returns sandbox id \"c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7\"" Jan 30 18:08:24.599491 containerd[1633]: time="2025-01-30T18:08:24.599419044Z" level=info msg="CreateContainer within sandbox \"c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 18:08:24.620992 containerd[1633]: time="2025-01-30T18:08:24.620917157Z" level=info msg="CreateContainer within sandbox \"c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"736c2447a33d82c150a5f1211667d8f994dd41cedb435c7cce8697db39b89586\"" Jan 30 18:08:24.623802 containerd[1633]: time="2025-01-30T18:08:24.623041790Z" level=info msg="StartContainer for \"736c2447a33d82c150a5f1211667d8f994dd41cedb435c7cce8697db39b89586\"" Jan 30 18:08:24.703136 containerd[1633]: time="2025-01-30T18:08:24.703088113Z" level=info msg="StartContainer for \"736c2447a33d82c150a5f1211667d8f994dd41cedb435c7cce8697db39b89586\" returns successfully" Jan 30 18:08:24.834200 systemd-networkd[1256]: vxlan.calico: Gained IPv6LL Jan 30 18:08:25.474255 systemd-networkd[1256]: cali16b64e874ba: Gained IPv6LL Jan 30 18:08:25.636962 containerd[1633]: time="2025-01-30T18:08:25.636581538Z" level=info msg="StopPodSandbox for \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\"" Jan 30 18:08:25.637619 containerd[1633]: time="2025-01-30T18:08:25.637332737Z" level=info msg="StopPodSandbox for \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\"" Jan 30 18:08:25.773981 kubelet[2960]: I0130 18:08:25.773313 2960 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/coredns-7db6d8ff4d-2hwgm" podStartSLOduration=36.773287271 podStartE2EDuration="36.773287271s" podCreationTimestamp="2025-01-30 18:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:08:24.976281535 +0000 UTC m=+50.542886823" watchObservedRunningTime="2025-01-30 18:08:25.773287271 +0000 UTC m=+51.339892551" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.771 [INFO][4510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.776 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" iface="eth0" netns="/var/run/netns/cni-1bb5c595-7b63-6c8e-f900-0f03b2799fcd" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.777 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" iface="eth0" netns="/var/run/netns/cni-1bb5c595-7b63-6c8e-f900-0f03b2799fcd" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.777 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" iface="eth0" netns="/var/run/netns/cni-1bb5c595-7b63-6c8e-f900-0f03b2799fcd" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.778 [INFO][4510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.778 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.830 [INFO][4522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.831 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.831 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.844 [WARNING][4522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.844 [INFO][4522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.847 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:25.857817 containerd[1633]: 2025-01-30 18:08:25.854 [INFO][4510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:25.863314 containerd[1633]: time="2025-01-30T18:08:25.858100662Z" level=info msg="TearDown network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\" successfully" Jan 30 18:08:25.863314 containerd[1633]: time="2025-01-30T18:08:25.858161204Z" level=info msg="StopPodSandbox for \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\" returns successfully" Jan 30 18:08:25.863314 containerd[1633]: time="2025-01-30T18:08:25.859481466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6748df8946-p6bqf,Uid:89cbff21-1d7a-4590-8dec-898c81289a13,Namespace:calico-system,Attempt:1,}" Jan 30 18:08:25.862761 systemd[1]: run-netns-cni\x2d1bb5c595\x2d7b63\x2d6c8e\x2df900\x2d0f03b2799fcd.mount: Deactivated successfully. 
Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.776 [INFO][4509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.777 [INFO][4509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" iface="eth0" netns="/var/run/netns/cni-3bce5f80-ebfe-2791-1721-22234ac2da31" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.778 [INFO][4509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" iface="eth0" netns="/var/run/netns/cni-3bce5f80-ebfe-2791-1721-22234ac2da31" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.778 [INFO][4509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" iface="eth0" netns="/var/run/netns/cni-3bce5f80-ebfe-2791-1721-22234ac2da31" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.778 [INFO][4509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.778 [INFO][4509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.867 [INFO][4523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.867 
[INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.867 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.882 [WARNING][4523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.882 [INFO][4523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.886 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:25.901082 containerd[1633]: 2025-01-30 18:08:25.893 [INFO][4509] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:25.909983 containerd[1633]: time="2025-01-30T18:08:25.904420902Z" level=info msg="TearDown network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\" successfully" Jan 30 18:08:25.909983 containerd[1633]: time="2025-01-30T18:08:25.904507176Z" level=info msg="StopPodSandbox for \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\" returns successfully" Jan 30 18:08:25.909983 containerd[1633]: time="2025-01-30T18:08:25.908268631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cs5tc,Uid:27fc1c1a-8707-42c1-a228-145009810bf8,Namespace:kube-system,Attempt:1,}" Jan 30 18:08:25.906008 systemd[1]: run-netns-cni\x2d3bce5f80\x2debfe\x2d2791\x2d1721\x2d22234ac2da31.mount: Deactivated successfully. Jan 30 18:08:26.172223 systemd-networkd[1256]: cali2bbf1d491b0: Link UP Jan 30 18:08:26.172640 systemd-networkd[1256]: cali2bbf1d491b0: Gained carrier Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:25.983 [INFO][4536] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0 calico-kube-controllers-6748df8946- calico-system 89cbff21-1d7a-4590-8dec-898c81289a13 811 0 2025-01-30 18:07:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6748df8946 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-xoz4v.gb1.brightbox.com calico-kube-controllers-6748df8946-p6bqf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2bbf1d491b0 [] []}} ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" 
Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:25.983 [INFO][4536] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.090 [INFO][4557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" HandleID="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.106 [INFO][4557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" HandleID="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xoz4v.gb1.brightbox.com", "pod":"calico-kube-controllers-6748df8946-p6bqf", "timestamp":"2025-01-30 18:08:26.090613342 +0000 UTC"}, Hostname:"srv-xoz4v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.106 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.107 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.107 [INFO][4557] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xoz4v.gb1.brightbox.com' Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.110 [INFO][4557] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.118 [INFO][4557] ipam/ipam.go 372: Looking up existing affinities for host host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.127 [INFO][4557] ipam/ipam.go 489: Trying affinity for 192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.129 [INFO][4557] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.134 [INFO][4557] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.134 [INFO][4557] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.137 [INFO][4557] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4 Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.149 [INFO][4557] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.192/26 
handle="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.158 [INFO][4557] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.194/26] block=192.168.37.192/26 handle="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.159 [INFO][4557] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.194/26] handle="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.159 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:26.210068 containerd[1633]: 2025-01-30 18:08:26.159 [INFO][4557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.194/26] IPv6=[] ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" HandleID="k8s-pod-network.d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:26.213698 containerd[1633]: 2025-01-30 18:08:26.163 [INFO][4536] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0", GenerateName:"calico-kube-controllers-6748df8946-", Namespace:"calico-system", SelfLink:"", UID:"89cbff21-1d7a-4590-8dec-898c81289a13", 
ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6748df8946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6748df8946-p6bqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bbf1d491b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:26.213698 containerd[1633]: 2025-01-30 18:08:26.163 [INFO][4536] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.194/32] ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:26.213698 containerd[1633]: 2025-01-30 18:08:26.163 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bbf1d491b0 ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:26.213698 
containerd[1633]: 2025-01-30 18:08:26.173 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:26.213698 containerd[1633]: 2025-01-30 18:08:26.175 [INFO][4536] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0", GenerateName:"calico-kube-controllers-6748df8946-", Namespace:"calico-system", SelfLink:"", UID:"89cbff21-1d7a-4590-8dec-898c81289a13", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6748df8946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4", 
Pod:"calico-kube-controllers-6748df8946-p6bqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bbf1d491b0", MAC:"e6:5b:f2:3e:78:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:26.213698 containerd[1633]: 2025-01-30 18:08:26.201 [INFO][4536] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4" Namespace="calico-system" Pod="calico-kube-controllers-6748df8946-p6bqf" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:26.255584 systemd-networkd[1256]: cali3f59a49b742: Link UP Jan 30 18:08:26.259354 systemd-networkd[1256]: cali3f59a49b742: Gained carrier Jan 30 18:08:26.292923 containerd[1633]: time="2025-01-30T18:08:26.291544190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:08:26.292923 containerd[1633]: time="2025-01-30T18:08:26.291667402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:08:26.292923 containerd[1633]: time="2025-01-30T18:08:26.291691793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:26.292923 containerd[1633]: time="2025-01-30T18:08:26.291937242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.064 [INFO][4545] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0 coredns-7db6d8ff4d- kube-system 27fc1c1a-8707-42c1-a228-145009810bf8 812 0 2025-01-30 18:07:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-xoz4v.gb1.brightbox.com coredns-7db6d8ff4d-cs5tc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f59a49b742 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.066 [INFO][4545] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.135 [INFO][4566] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" HandleID="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.151 [INFO][4566] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" 
HandleID="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d70), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-xoz4v.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-cs5tc", "timestamp":"2025-01-30 18:08:26.135758151 +0000 UTC"}, Hostname:"srv-xoz4v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.151 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.159 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.159 [INFO][4566] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xoz4v.gb1.brightbox.com' Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.163 [INFO][4566] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.176 [INFO][4566] ipam/ipam.go 372: Looking up existing affinities for host host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.188 [INFO][4566] ipam/ipam.go 489: Trying affinity for 192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.191 [INFO][4566] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.207 [INFO][4566] ipam/ipam.go 232: Affinity is 
confirmed and block has been loaded cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.207 [INFO][4566] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.211 [INFO][4566] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8 Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.222 [INFO][4566] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.233 [INFO][4566] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.195/26] block=192.168.37.192/26 handle="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.234 [INFO][4566] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.195/26] handle="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.235 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 18:08:26.300279 containerd[1633]: 2025-01-30 18:08:26.235 [INFO][4566] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.195/26] IPv6=[] ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" HandleID="k8s-pod-network.d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:26.303520 containerd[1633]: 2025-01-30 18:08:26.243 [INFO][4545] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27fc1c1a-8707-42c1-a228-145009810bf8", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-cs5tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali3f59a49b742", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:26.303520 containerd[1633]: 2025-01-30 18:08:26.248 [INFO][4545] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.195/32] ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:26.303520 containerd[1633]: 2025-01-30 18:08:26.248 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f59a49b742 ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:26.303520 containerd[1633]: 2025-01-30 18:08:26.261 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:26.303520 containerd[1633]: 2025-01-30 18:08:26.261 [INFO][4545] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" 
WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27fc1c1a-8707-42c1-a228-145009810bf8", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8", Pod:"coredns-7db6d8ff4d-cs5tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f59a49b742", MAC:"a6:95:e0:c3:cc:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:26.303520 containerd[1633]: 2025-01-30 18:08:26.288 
[INFO][4545] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cs5tc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:26.372320 containerd[1633]: time="2025-01-30T18:08:26.371690842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:08:26.372320 containerd[1633]: time="2025-01-30T18:08:26.371933013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:08:26.372320 containerd[1633]: time="2025-01-30T18:08:26.371960250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:26.372903 containerd[1633]: time="2025-01-30T18:08:26.372648891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:26.459177 containerd[1633]: time="2025-01-30T18:08:26.458859547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6748df8946-p6bqf,Uid:89cbff21-1d7a-4590-8dec-898c81289a13,Namespace:calico-system,Attempt:1,} returns sandbox id \"d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4\"" Jan 30 18:08:26.464287 containerd[1633]: time="2025-01-30T18:08:26.462737849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 18:08:26.490847 containerd[1633]: time="2025-01-30T18:08:26.490786407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cs5tc,Uid:27fc1c1a-8707-42c1-a228-145009810bf8,Namespace:kube-system,Attempt:1,} returns sandbox id \"d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8\"" Jan 30 18:08:26.496364 containerd[1633]: time="2025-01-30T18:08:26.496313099Z" level=info msg="CreateContainer within sandbox \"d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 18:08:26.511370 containerd[1633]: time="2025-01-30T18:08:26.511282656Z" level=info msg="CreateContainer within sandbox \"d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"695127df613dd9099297fefe627ee286dd9b11212bc8a23c16821be6ccb85a9b\"" Jan 30 18:08:26.513190 containerd[1633]: time="2025-01-30T18:08:26.512857334Z" level=info msg="StartContainer for \"695127df613dd9099297fefe627ee286dd9b11212bc8a23c16821be6ccb85a9b\"" Jan 30 18:08:26.597336 containerd[1633]: time="2025-01-30T18:08:26.597277743Z" level=info msg="StartContainer for \"695127df613dd9099297fefe627ee286dd9b11212bc8a23c16821be6ccb85a9b\" returns successfully" Jan 30 18:08:26.639062 containerd[1633]: time="2025-01-30T18:08:26.638920751Z" level=info msg="StopPodSandbox for 
\"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\"" Jan 30 18:08:26.641663 containerd[1633]: time="2025-01-30T18:08:26.641120539Z" level=info msg="StopPodSandbox for \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\"" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.749 [INFO][4746] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.751 [INFO][4746] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" iface="eth0" netns="/var/run/netns/cni-c41e89d6-bc40-d183-2403-18b3addf70df" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.752 [INFO][4746] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" iface="eth0" netns="/var/run/netns/cni-c41e89d6-bc40-d183-2403-18b3addf70df" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.755 [INFO][4746] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" iface="eth0" netns="/var/run/netns/cni-c41e89d6-bc40-d183-2403-18b3addf70df" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.756 [INFO][4746] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.756 [INFO][4746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.808 [INFO][4758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.809 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.809 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.818 [WARNING][4758] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.818 [INFO][4758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.821 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:26.835574 containerd[1633]: 2025-01-30 18:08:26.826 [INFO][4746] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:26.838755 containerd[1633]: time="2025-01-30T18:08:26.837633004Z" level=info msg="TearDown network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\" successfully" Jan 30 18:08:26.838755 containerd[1633]: time="2025-01-30T18:08:26.837731371Z" level=info msg="StopPodSandbox for \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\" returns successfully" Jan 30 18:08:26.845594 containerd[1633]: time="2025-01-30T18:08:26.845157208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdn4g,Uid:ef41d1d4-bd1d-45f6-8486-f723f43e3c94,Namespace:calico-system,Attempt:1,}" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.743 [INFO][4747] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.746 [INFO][4747] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" iface="eth0" netns="/var/run/netns/cni-06b9e5c2-a430-68f1-ddc9-3c4a2f312ba5" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.751 [INFO][4747] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" iface="eth0" netns="/var/run/netns/cni-06b9e5c2-a430-68f1-ddc9-3c4a2f312ba5" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.754 [INFO][4747] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" iface="eth0" netns="/var/run/netns/cni-06b9e5c2-a430-68f1-ddc9-3c4a2f312ba5" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.755 [INFO][4747] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.755 [INFO][4747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.805 [INFO][4759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.809 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.821 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.840 [WARNING][4759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.840 [INFO][4759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.843 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:26.849567 containerd[1633]: 2025-01-30 18:08:26.846 [INFO][4747] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:26.853082 containerd[1633]: time="2025-01-30T18:08:26.850617446Z" level=info msg="TearDown network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\" successfully" Jan 30 18:08:26.853082 containerd[1633]: time="2025-01-30T18:08:26.850673466Z" level=info msg="StopPodSandbox for \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\" returns successfully" Jan 30 18:08:26.854606 containerd[1633]: time="2025-01-30T18:08:26.854223090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-bqbzc,Uid:ba41a6ef-6128-4cea-831d-b51553e05c5c,Namespace:calico-apiserver,Attempt:1,}" Jan 30 18:08:27.021004 systemd[1]: run-containerd-runc-k8s.io-d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8-runc.ghdLIp.mount: Deactivated successfully. 
Jan 30 18:08:27.021286 systemd[1]: run-netns-cni\x2dc41e89d6\x2dbc40\x2dd183\x2d2403\x2d18b3addf70df.mount: Deactivated successfully. Jan 30 18:08:27.021461 systemd[1]: run-netns-cni\x2d06b9e5c2\x2da430\x2d68f1\x2dddc9\x2d3c4a2f312ba5.mount: Deactivated successfully. Jan 30 18:08:27.084941 kubelet[2960]: I0130 18:08:27.081273 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cs5tc" podStartSLOduration=38.081244607 podStartE2EDuration="38.081244607s" podCreationTimestamp="2025-01-30 18:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 18:08:27.040067124 +0000 UTC m=+52.606672410" watchObservedRunningTime="2025-01-30 18:08:27.081244607 +0000 UTC m=+52.647849887" Jan 30 18:08:27.347297 systemd-networkd[1256]: cali3408c576d6c: Link UP Jan 30 18:08:27.350489 systemd-networkd[1256]: cali3408c576d6c: Gained carrier Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.098 [INFO][4780] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0 calico-apiserver-86c44fb797- calico-apiserver ba41a6ef-6128-4cea-831d-b51553e05c5c 832 0 2025-01-30 18:07:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c44fb797 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-xoz4v.gb1.brightbox.com calico-apiserver-86c44fb797-bqbzc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3408c576d6c [] []}} ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" 
WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.098 [INFO][4780] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.206 [INFO][4798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" HandleID="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.225 [INFO][4798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" HandleID="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-xoz4v.gb1.brightbox.com", "pod":"calico-apiserver-86c44fb797-bqbzc", "timestamp":"2025-01-30 18:08:27.205982001 +0000 UTC"}, Hostname:"srv-xoz4v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.225 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.225 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.225 [INFO][4798] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xoz4v.gb1.brightbox.com' Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.229 [INFO][4798] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.240 [INFO][4798] ipam/ipam.go 372: Looking up existing affinities for host host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.250 [INFO][4798] ipam/ipam.go 489: Trying affinity for 192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.257 [INFO][4798] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.270 [INFO][4798] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.271 [INFO][4798] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.276 [INFO][4798] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742 Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.288 [INFO][4798] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.192/26 
handle="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.306 [INFO][4798] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.196/26] block=192.168.37.192/26 handle="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.307 [INFO][4798] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.196/26] handle="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.307 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:27.388674 containerd[1633]: 2025-01-30 18:08:27.307 [INFO][4798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.196/26] IPv6=[] ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" HandleID="k8s-pod-network.88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:27.393743 containerd[1633]: 2025-01-30 18:08:27.323 [INFO][4780] cni-plugin/k8s.go 386: Populated endpoint ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba41a6ef-6128-4cea-831d-b51553e05c5c", ResourceVersion:"832", 
Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-86c44fb797-bqbzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3408c576d6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:27.393743 containerd[1633]: 2025-01-30 18:08:27.325 [INFO][4780] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.196/32] ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:27.393743 containerd[1633]: 2025-01-30 18:08:27.329 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3408c576d6c ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:27.393743 containerd[1633]: 2025-01-30 18:08:27.351 [INFO][4780] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:27.393743 containerd[1633]: 2025-01-30 18:08:27.353 [INFO][4780] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba41a6ef-6128-4cea-831d-b51553e05c5c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742", Pod:"calico-apiserver-86c44fb797-bqbzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3408c576d6c", MAC:"3a:75:23:6c:82:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:27.393743 containerd[1633]: 2025-01-30 18:08:27.377 [INFO][4780] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-bqbzc" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:27.446280 systemd-networkd[1256]: calif6a228a1585: Link UP Jan 30 18:08:27.450011 systemd-networkd[1256]: calif6a228a1585: Gained carrier Jan 30 18:08:27.458113 systemd-networkd[1256]: cali3f59a49b742: Gained IPv6LL Jan 30 18:08:27.610097 containerd[1633]: time="2025-01-30T18:08:27.603709894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:08:27.610097 containerd[1633]: time="2025-01-30T18:08:27.605830251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:08:27.610097 containerd[1633]: time="2025-01-30T18:08:27.607200982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:27.612850 containerd[1633]: time="2025-01-30T18:08:27.609822000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:27.669893 containerd[1633]: time="2025-01-30T18:08:27.668672900Z" level=info msg="StopPodSandbox for \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\"" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.076 [INFO][4770] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0 csi-node-driver- calico-system ef41d1d4-bd1d-45f6-8486-f723f43e3c94 833 0 2025-01-30 18:07:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-xoz4v.gb1.brightbox.com csi-node-driver-tdn4g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif6a228a1585 [] []}} ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.081 [INFO][4770] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.256 [INFO][4804] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" HandleID="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" 
Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.276 [INFO][4804] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" HandleID="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004bd230), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xoz4v.gb1.brightbox.com", "pod":"csi-node-driver-tdn4g", "timestamp":"2025-01-30 18:08:27.256457482 +0000 UTC"}, Hostname:"srv-xoz4v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.276 [INFO][4804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.311 [INFO][4804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.311 [INFO][4804] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xoz4v.gb1.brightbox.com' Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.318 [INFO][4804] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.338 [INFO][4804] ipam/ipam.go 372: Looking up existing affinities for host host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.358 [INFO][4804] ipam/ipam.go 489: Trying affinity for 192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.367 [INFO][4804] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.380 [INFO][4804] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.380 [INFO][4804] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.384 [INFO][4804] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.400 [INFO][4804] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.416 [INFO][4804] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.197/26] block=192.168.37.192/26 handle="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.416 [INFO][4804] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.197/26] handle="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.416 [INFO][4804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:27.832615 containerd[1633]: 2025-01-30 18:08:27.416 [INFO][4804] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.197/26] IPv6=[] ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" HandleID="k8s-pod-network.24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:27.833881 containerd[1633]: 2025-01-30 18:08:27.430 [INFO][4770] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef41d1d4-bd1d-45f6-8486-f723f43e3c94", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-tdn4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6a228a1585", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:27.833881 containerd[1633]: 2025-01-30 18:08:27.430 [INFO][4770] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.197/32] ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:27.833881 containerd[1633]: 2025-01-30 18:08:27.430 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6a228a1585 ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:27.833881 containerd[1633]: 2025-01-30 18:08:27.448 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:27.833881 
containerd[1633]: 2025-01-30 18:08:27.451 [INFO][4770] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef41d1d4-bd1d-45f6-8486-f723f43e3c94", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f", Pod:"csi-node-driver-tdn4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6a228a1585", MAC:"3a:da:0a:0a:28:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:27.833881 containerd[1633]: 2025-01-30 18:08:27.817 [INFO][4770] 
cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f" Namespace="calico-system" Pod="csi-node-driver-tdn4g" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:27.907616 systemd-networkd[1256]: cali2bbf1d491b0: Gained IPv6LL Jan 30 18:08:27.996116 containerd[1633]: time="2025-01-30T18:08:27.992552489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:08:28.008050 containerd[1633]: time="2025-01-30T18:08:28.000091891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:08:28.008050 containerd[1633]: time="2025-01-30T18:08:28.000143745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:28.008050 containerd[1633]: time="2025-01-30T18:08:28.000325050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:28.015783 containerd[1633]: time="2025-01-30T18:08:28.015704219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-bqbzc,Uid:ba41a6ef-6128-4cea-831d-b51553e05c5c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742\"" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:27.898 [INFO][4876] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:27.899 [INFO][4876] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" iface="eth0" netns="/var/run/netns/cni-805a01f9-c4c9-a15f-752e-b7bc267fb9a0" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:27.901 [INFO][4876] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" iface="eth0" netns="/var/run/netns/cni-805a01f9-c4c9-a15f-752e-b7bc267fb9a0" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:27.903 [INFO][4876] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" iface="eth0" netns="/var/run/netns/cni-805a01f9-c4c9-a15f-752e-b7bc267fb9a0" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:27.903 [INFO][4876] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:27.903 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:28.111 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:28.111 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:28.111 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:28.143 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:28.144 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:28.149 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:28.167672 containerd[1633]: 2025-01-30 18:08:28.157 [INFO][4876] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:28.167672 containerd[1633]: time="2025-01-30T18:08:28.167492140Z" level=info msg="TearDown network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\" successfully" Jan 30 18:08:28.167672 containerd[1633]: time="2025-01-30T18:08:28.167533578Z" level=info msg="StopPodSandbox for \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\" returns successfully" Jan 30 18:08:28.175169 containerd[1633]: time="2025-01-30T18:08:28.175028240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-kj4m5,Uid:8b9ee45a-38b3-4449-976f-eb0f425ce59f,Namespace:calico-apiserver,Attempt:1,}" Jan 30 18:08:28.180299 systemd[1]: run-netns-cni\x2d805a01f9\x2dc4c9\x2da15f\x2d752e\x2db7bc267fb9a0.mount: Deactivated successfully. 
Jan 30 18:08:28.340791 containerd[1633]: time="2025-01-30T18:08:28.340728792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdn4g,Uid:ef41d1d4-bd1d-45f6-8486-f723f43e3c94,Namespace:calico-system,Attempt:1,} returns sandbox id \"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f\"" Jan 30 18:08:28.483251 systemd-networkd[1256]: cali3408c576d6c: Gained IPv6LL Jan 30 18:08:28.565575 systemd-networkd[1256]: calif045820def5: Link UP Jan 30 18:08:28.567659 systemd-networkd[1256]: calif045820def5: Gained carrier Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.409 [INFO][4941] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0 calico-apiserver-86c44fb797- calico-apiserver 8b9ee45a-38b3-4449-976f-eb0f425ce59f 851 0 2025-01-30 18:07:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c44fb797 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-xoz4v.gb1.brightbox.com calico-apiserver-86c44fb797-kj4m5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif045820def5 [] []}} ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.410 [INFO][4941] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 
18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.466 [INFO][4960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" HandleID="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.481 [INFO][4960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" HandleID="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334d90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-xoz4v.gb1.brightbox.com", "pod":"calico-apiserver-86c44fb797-kj4m5", "timestamp":"2025-01-30 18:08:28.465993247 +0000 UTC"}, Hostname:"srv-xoz4v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.481 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.481 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.481 [INFO][4960] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xoz4v.gb1.brightbox.com' Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.487 [INFO][4960] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.500 [INFO][4960] ipam/ipam.go 372: Looking up existing affinities for host host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.512 [INFO][4960] ipam/ipam.go 489: Trying affinity for 192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.515 [INFO][4960] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.521 [INFO][4960] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.521 [INFO][4960] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.525 [INFO][4960] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695 Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.535 [INFO][4960] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.549 [INFO][4960] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.198/26] block=192.168.37.192/26 handle="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.549 [INFO][4960] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.198/26] handle="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" host="srv-xoz4v.gb1.brightbox.com" Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.550 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:28.605720 containerd[1633]: 2025-01-30 18:08:28.550 [INFO][4960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.198/26] IPv6=[] ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" HandleID="k8s-pod-network.2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.612614 containerd[1633]: 2025-01-30 18:08:28.557 [INFO][4941] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b9ee45a-38b3-4449-976f-eb0f425ce59f", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-86c44fb797-kj4m5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif045820def5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:28.612614 containerd[1633]: 2025-01-30 18:08:28.557 [INFO][4941] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.198/32] ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.612614 containerd[1633]: 2025-01-30 18:08:28.557 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif045820def5 ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.612614 containerd[1633]: 2025-01-30 18:08:28.568 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" 
WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.612614 containerd[1633]: 2025-01-30 18:08:28.570 [INFO][4941] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b9ee45a-38b3-4449-976f-eb0f425ce59f", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695", Pod:"calico-apiserver-86c44fb797-kj4m5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif045820def5", MAC:"96:a5:e6:81:c4:41", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:28.612614 containerd[1633]: 2025-01-30 18:08:28.596 [INFO][4941] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695" Namespace="calico-apiserver" Pod="calico-apiserver-86c44fb797-kj4m5" WorkloadEndpoint="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:28.691886 containerd[1633]: time="2025-01-30T18:08:28.688307521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 18:08:28.691886 containerd[1633]: time="2025-01-30T18:08:28.688450699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 18:08:28.691886 containerd[1633]: time="2025-01-30T18:08:28.688472496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:28.691886 containerd[1633]: time="2025-01-30T18:08:28.688628448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 18:08:28.813304 containerd[1633]: time="2025-01-30T18:08:28.813211301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c44fb797-kj4m5,Uid:8b9ee45a-38b3-4449-976f-eb0f425ce59f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695\"" Jan 30 18:08:28.866590 systemd-networkd[1256]: calif6a228a1585: Gained IPv6LL Jan 30 18:08:29.917271 containerd[1633]: time="2025-01-30T18:08:29.917208587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:29.919184 containerd[1633]: time="2025-01-30T18:08:29.919114234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 18:08:29.920086 containerd[1633]: time="2025-01-30T18:08:29.920027990Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:29.923340 containerd[1633]: time="2025-01-30T18:08:29.923277059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:29.925242 containerd[1633]: time="2025-01-30T18:08:29.924753630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.461971017s" Jan 30 18:08:29.925242 containerd[1633]: 
time="2025-01-30T18:08:29.924797779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 18:08:29.930410 containerd[1633]: time="2025-01-30T18:08:29.930354648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 18:08:29.949446 containerd[1633]: time="2025-01-30T18:08:29.949126107Z" level=info msg="CreateContainer within sandbox \"d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 18:08:29.983516 containerd[1633]: time="2025-01-30T18:08:29.983055801Z" level=info msg="CreateContainer within sandbox \"d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2c791c7964ddf736d8c839d3db335c5833e886e9da2589971812f095d308f7a0\"" Jan 30 18:08:29.985691 containerd[1633]: time="2025-01-30T18:08:29.984231765Z" level=info msg="StartContainer for \"2c791c7964ddf736d8c839d3db335c5833e886e9da2589971812f095d308f7a0\"" Jan 30 18:08:30.019074 systemd-networkd[1256]: calif045820def5: Gained IPv6LL Jan 30 18:08:30.105977 containerd[1633]: time="2025-01-30T18:08:30.105250976Z" level=info msg="StartContainer for \"2c791c7964ddf736d8c839d3db335c5833e886e9da2589971812f095d308f7a0\" returns successfully" Jan 30 18:08:31.086122 kubelet[2960]: I0130 18:08:31.084130 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6748df8946-p6bqf" podStartSLOduration=31.616616667 podStartE2EDuration="35.084086114s" podCreationTimestamp="2025-01-30 18:07:56 +0000 UTC" firstStartedPulling="2025-01-30 18:08:26.462123459 +0000 UTC m=+52.028728724" lastFinishedPulling="2025-01-30 18:08:29.929592894 +0000 UTC m=+55.496198171" observedRunningTime="2025-01-30 18:08:31.07775178 +0000 UTC 
m=+56.644357066" watchObservedRunningTime="2025-01-30 18:08:31.084086114 +0000 UTC m=+56.650691393" Jan 30 18:08:31.793409 systemd[1]: Started sshd@19-10.230.68.22:22-113.200.60.74:49152.service - OpenSSH per-connection server daemon (113.200.60.74:49152). Jan 30 18:08:33.687947 containerd[1633]: time="2025-01-30T18:08:33.687382264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:33.690773 containerd[1633]: time="2025-01-30T18:08:33.690550633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 18:08:33.693520 containerd[1633]: time="2025-01-30T18:08:33.693472811Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:33.698419 containerd[1633]: time="2025-01-30T18:08:33.698328821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:33.702157 containerd[1633]: time="2025-01-30T18:08:33.701123523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.77053477s" Jan 30 18:08:33.702157 containerd[1633]: time="2025-01-30T18:08:33.701258966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 18:08:33.706368 containerd[1633]: 
time="2025-01-30T18:08:33.706274365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 18:08:33.710820 containerd[1633]: time="2025-01-30T18:08:33.710678334Z" level=info msg="CreateContainer within sandbox \"88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 18:08:33.736880 containerd[1633]: time="2025-01-30T18:08:33.736746565Z" level=info msg="CreateContainer within sandbox \"88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b90b199bb37452bdf3606a3199e58267b17440344300abee5e8bec956d8881d0\"" Jan 30 18:08:33.740130 containerd[1633]: time="2025-01-30T18:08:33.738765014Z" level=info msg="StartContainer for \"b90b199bb37452bdf3606a3199e58267b17440344300abee5e8bec956d8881d0\"" Jan 30 18:08:33.862283 containerd[1633]: time="2025-01-30T18:08:33.861880007Z" level=info msg="StartContainer for \"b90b199bb37452bdf3606a3199e58267b17440344300abee5e8bec956d8881d0\" returns successfully" Jan 30 18:08:34.129564 kubelet[2960]: I0130 18:08:34.129303 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c44fb797-bqbzc" podStartSLOduration=32.476843252 podStartE2EDuration="38.129267766s" podCreationTimestamp="2025-01-30 18:07:56 +0000 UTC" firstStartedPulling="2025-01-30 18:08:28.052195174 +0000 UTC m=+53.618800452" lastFinishedPulling="2025-01-30 18:08:33.704619696 +0000 UTC m=+59.271224966" observedRunningTime="2025-01-30 18:08:34.08880583 +0000 UTC m=+59.655411118" watchObservedRunningTime="2025-01-30 18:08:34.129267766 +0000 UTC m=+59.695873049" Jan 30 18:08:34.638150 containerd[1633]: time="2025-01-30T18:08:34.638098076Z" level=info msg="StopPodSandbox for \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\"" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.783 [WARNING][5151] 
cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b9ee45a-38b3-4449-976f-eb0f425ce59f", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695", Pod:"calico-apiserver-86c44fb797-kj4m5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif045820def5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.789 [INFO][5151] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 
18:08:34.789 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" iface="eth0" netns="" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.789 [INFO][5151] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.789 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.908 [INFO][5157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.908 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.908 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.919 [WARNING][5157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.919 [INFO][5157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.921 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:34.930662 containerd[1633]: 2025-01-30 18:08:34.927 [INFO][5151] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:34.930662 containerd[1633]: time="2025-01-30T18:08:34.929844147Z" level=info msg="TearDown network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\" successfully" Jan 30 18:08:34.930662 containerd[1633]: time="2025-01-30T18:08:34.929928810Z" level=info msg="StopPodSandbox for \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\" returns successfully" Jan 30 18:08:34.952098 containerd[1633]: time="2025-01-30T18:08:34.952035737Z" level=info msg="RemovePodSandbox for \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\"" Jan 30 18:08:34.952406 containerd[1633]: time="2025-01-30T18:08:34.952367034Z" level=info msg="Forcibly stopping sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\"" Jan 30 18:08:35.070146 kubelet[2960]: I0130 18:08:35.070100 2960 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.026 
[WARNING][5175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b9ee45a-38b3-4449-976f-eb0f425ce59f", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695", Pod:"calico-apiserver-86c44fb797-kj4m5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif045820def5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.026 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:35.086631 
containerd[1633]: 2025-01-30 18:08:35.026 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" iface="eth0" netns="" Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.026 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.026 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.068 [INFO][5182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.069 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.069 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.079 [WARNING][5182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.079 [INFO][5182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" HandleID="k8s-pod-network.0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--kj4m5-eth0" Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.082 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:35.086631 containerd[1633]: 2025-01-30 18:08:35.084 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f" Jan 30 18:08:35.089367 containerd[1633]: time="2025-01-30T18:08:35.087735639Z" level=info msg="TearDown network for sandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\" successfully" Jan 30 18:08:35.092028 containerd[1633]: time="2025-01-30T18:08:35.091966483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 18:08:35.092162 containerd[1633]: time="2025-01-30T18:08:35.092093288Z" level=info msg="RemovePodSandbox \"0e69a7e4ce5f71f3d7cceb985b0c50dad158ba76a627db60dbeebcb50f031c6f\" returns successfully" Jan 30 18:08:35.097848 containerd[1633]: time="2025-01-30T18:08:35.097808550Z" level=info msg="StopPodSandbox for \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\"" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.314 [WARNING][5200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef41d1d4-bd1d-45f6-8486-f723f43e3c94", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f", Pod:"csi-node-driver-tdn4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6a228a1585", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.314 [INFO][5200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.314 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" iface="eth0" netns="" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.314 [INFO][5200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.314 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.360 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.360 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.360 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.375 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.375 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.382 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:35.411602 containerd[1633]: 2025-01-30 18:08:35.406 [INFO][5200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.417454 containerd[1633]: time="2025-01-30T18:08:35.411666637Z" level=info msg="TearDown network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\" successfully" Jan 30 18:08:35.417454 containerd[1633]: time="2025-01-30T18:08:35.411711357Z" level=info msg="StopPodSandbox for \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\" returns successfully" Jan 30 18:08:35.417454 containerd[1633]: time="2025-01-30T18:08:35.413270428Z" level=info msg="RemovePodSandbox for \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\"" Jan 30 18:08:35.417454 containerd[1633]: time="2025-01-30T18:08:35.413348815Z" level=info msg="Forcibly stopping sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\"" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.529 [WARNING][5230] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef41d1d4-bd1d-45f6-8486-f723f43e3c94", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f", Pod:"csi-node-driver-tdn4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif6a228a1585", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.529 [INFO][5230] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.529 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" iface="eth0" netns="" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.529 [INFO][5230] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.529 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.584 [INFO][5236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.584 [INFO][5236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.584 [INFO][5236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.603 [WARNING][5236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.603 [INFO][5236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" HandleID="k8s-pod-network.ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Workload="srv--xoz4v.gb1.brightbox.com-k8s-csi--node--driver--tdn4g-eth0" Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.607 [INFO][5236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:35.619786 containerd[1633]: 2025-01-30 18:08:35.612 [INFO][5230] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd" Jan 30 18:08:35.619786 containerd[1633]: time="2025-01-30T18:08:35.619417827Z" level=info msg="TearDown network for sandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\" successfully" Jan 30 18:08:35.625893 containerd[1633]: time="2025-01-30T18:08:35.625811252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 18:08:35.625970 containerd[1633]: time="2025-01-30T18:08:35.625926060Z" level=info msg="RemovePodSandbox \"ab43b944ab38054694cdc3cc58b471b897d918150326577a7798fc7fc96ce9dd\" returns successfully" Jan 30 18:08:35.627321 containerd[1633]: time="2025-01-30T18:08:35.626948008Z" level=info msg="StopPodSandbox for \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\"" Jan 30 18:08:35.632950 containerd[1633]: time="2025-01-30T18:08:35.632717705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:35.635961 containerd[1633]: time="2025-01-30T18:08:35.635858311Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:35.636567 containerd[1633]: time="2025-01-30T18:08:35.636506210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 18:08:35.640309 containerd[1633]: time="2025-01-30T18:08:35.640246381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:35.641570 containerd[1633]: time="2025-01-30T18:08:35.641508310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.935150969s" Jan 30 18:08:35.641662 containerd[1633]: time="2025-01-30T18:08:35.641567038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 18:08:35.643691 containerd[1633]: time="2025-01-30T18:08:35.643463424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 18:08:35.652355 containerd[1633]: time="2025-01-30T18:08:35.651938806Z" level=info msg="CreateContainer within sandbox \"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 18:08:35.681260 containerd[1633]: time="2025-01-30T18:08:35.679159970Z" level=info msg="CreateContainer within sandbox \"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a14fc31ec22ed1bdbddbdec0b171f97ed571f889bd6a8520bf4ac187e74ddad1\"" Jan 30 18:08:35.680261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800816637.mount: Deactivated successfully. Jan 30 18:08:35.687878 containerd[1633]: time="2025-01-30T18:08:35.687778793Z" level=info msg="StartContainer for \"a14fc31ec22ed1bdbddbdec0b171f97ed571f889bd6a8520bf4ac187e74ddad1\"" Jan 30 18:08:35.760019 systemd[1]: run-containerd-runc-k8s.io-a14fc31ec22ed1bdbddbdec0b171f97ed571f889bd6a8520bf4ac187e74ddad1-runc.DGMGyA.mount: Deactivated successfully. Jan 30 18:08:35.854312 systemd-journald[1172]: Under memory pressure, flushing caches. Jan 30 18:08:35.853606 systemd-resolved[1511]: Under memory pressure, flushing caches. Jan 30 18:08:35.853705 systemd-resolved[1511]: Flushed all caches. Jan 30 18:08:35.878166 containerd[1633]: time="2025-01-30T18:08:35.877715295Z" level=info msg="StartContainer for \"a14fc31ec22ed1bdbddbdec0b171f97ed571f889bd6a8520bf4ac187e74ddad1\" returns successfully" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.743 [WARNING][5254] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27fc1c1a-8707-42c1-a228-145009810bf8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8", Pod:"coredns-7db6d8ff4d-cs5tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f59a49b742", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.749 [INFO][5254] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.749 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" iface="eth0" netns="" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.750 [INFO][5254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.750 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.859 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.859 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.860 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.872 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.872 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.875 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:35.888953 containerd[1633]: 2025-01-30 18:08:35.882 [INFO][5254] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:35.888953 containerd[1633]: time="2025-01-30T18:08:35.888703997Z" level=info msg="TearDown network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\" successfully" Jan 30 18:08:35.888953 containerd[1633]: time="2025-01-30T18:08:35.888767811Z" level=info msg="StopPodSandbox for \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\" returns successfully" Jan 30 18:08:35.890722 containerd[1633]: time="2025-01-30T18:08:35.890670674Z" level=info msg="RemovePodSandbox for \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\"" Jan 30 18:08:35.891385 containerd[1633]: time="2025-01-30T18:08:35.890837185Z" level=info msg="Forcibly stopping sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\"" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.957 [WARNING][5311] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"27fc1c1a-8707-42c1-a228-145009810bf8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"d7765d1b480e8bb7df242f6e566915552050e97c3d82a5fc724769ba71362fa8", Pod:"coredns-7db6d8ff4d-cs5tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f59a49b742", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.957 [INFO][5311] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.957 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" iface="eth0" netns="" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.957 [INFO][5311] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.957 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.993 [INFO][5318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.994 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:35.994 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:36.002 [WARNING][5318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:36.002 [INFO][5318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" HandleID="k8s-pod-network.2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--cs5tc-eth0" Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:36.005 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:36.010046 containerd[1633]: 2025-01-30 18:08:36.007 [INFO][5311] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4" Jan 30 18:08:36.011434 containerd[1633]: time="2025-01-30T18:08:36.009993742Z" level=info msg="TearDown network for sandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\" successfully" Jan 30 18:08:36.016439 containerd[1633]: time="2025-01-30T18:08:36.016400279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 18:08:36.016582 containerd[1633]: time="2025-01-30T18:08:36.016530763Z" level=info msg="RemovePodSandbox \"2c543aee0cf2947030f0332cea24591a96c45cb9ca5b343f100a56c463e2b9c4\" returns successfully" Jan 30 18:08:36.017674 containerd[1633]: time="2025-01-30T18:08:36.017541039Z" level=info msg="StopPodSandbox for \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\"" Jan 30 18:08:36.075295 containerd[1633]: time="2025-01-30T18:08:36.074599250Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:36.077292 containerd[1633]: time="2025-01-30T18:08:36.076893371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 18:08:36.085896 containerd[1633]: time="2025-01-30T18:08:36.085732470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 442.194705ms" Jan 30 18:08:36.085896 containerd[1633]: time="2025-01-30T18:08:36.085800869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 18:08:36.090789 containerd[1633]: time="2025-01-30T18:08:36.090621130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 18:08:36.099766 containerd[1633]: time="2025-01-30T18:08:36.099595944Z" level=info msg="CreateContainer within sandbox \"2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 18:08:36.153521 containerd[1633]: 
time="2025-01-30T18:08:36.153363475Z" level=info msg="CreateContainer within sandbox \"2e1ccb33c0a11d83a89c4c8b776ef6ce9d9cb67a9be3df08ef8776c6c174b695\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b9e25575c5823ea335f72f85f30d3b5cc6e42e3c6e5d44102f3b262a6f07eda1\"" Jan 30 18:08:36.156117 containerd[1633]: time="2025-01-30T18:08:36.156035914Z" level=info msg="StartContainer for \"b9e25575c5823ea335f72f85f30d3b5cc6e42e3c6e5d44102f3b262a6f07eda1\"" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.077 [WARNING][5336] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba41a6ef-6128-4cea-831d-b51553e05c5c", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742", Pod:"calico-apiserver-86c44fb797-bqbzc", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3408c576d6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.077 [INFO][5336] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.077 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" iface="eth0" netns="" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.077 [INFO][5336] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.077 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.130 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.132 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.132 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.149 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.149 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.152 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:36.164367 containerd[1633]: 2025-01-30 18:08:36.159 [INFO][5336] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.165663 containerd[1633]: time="2025-01-30T18:08:36.164317404Z" level=info msg="TearDown network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\" successfully" Jan 30 18:08:36.165663 containerd[1633]: time="2025-01-30T18:08:36.164929367Z" level=info msg="StopPodSandbox for \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\" returns successfully" Jan 30 18:08:36.166440 containerd[1633]: time="2025-01-30T18:08:36.165902934Z" level=info msg="RemovePodSandbox for \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\"" Jan 30 18:08:36.166440 containerd[1633]: time="2025-01-30T18:08:36.165940457Z" level=info msg="Forcibly stopping sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\"" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.266 [WARNING][5368] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0", GenerateName:"calico-apiserver-86c44fb797-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba41a6ef-6128-4cea-831d-b51553e05c5c", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c44fb797", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"88881d5fc090f0a88218d05613f67ccf902663165d85aad395e4ed9f4785d742", Pod:"calico-apiserver-86c44fb797-bqbzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3408c576d6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.267 [INFO][5368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.267 [INFO][5368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" iface="eth0" netns="" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.267 [INFO][5368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.267 [INFO][5368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.333 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.334 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.334 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.356 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.357 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" HandleID="k8s-pod-network.265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--apiserver--86c44fb797--bqbzc-eth0" Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.361 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:36.369119 containerd[1633]: 2025-01-30 18:08:36.366 [INFO][5368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb" Jan 30 18:08:36.375219 containerd[1633]: time="2025-01-30T18:08:36.369193096Z" level=info msg="TearDown network for sandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\" successfully" Jan 30 18:08:36.424425 containerd[1633]: time="2025-01-30T18:08:36.424269952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 18:08:36.425602 containerd[1633]: time="2025-01-30T18:08:36.425558944Z" level=info msg="RemovePodSandbox \"265d769654847538caba8d9112bcc641bd0117d300f01a0e5f8844084c36ecbb\" returns successfully" Jan 30 18:08:36.427119 containerd[1633]: time="2025-01-30T18:08:36.426902001Z" level=info msg="StartContainer for \"b9e25575c5823ea335f72f85f30d3b5cc6e42e3c6e5d44102f3b262a6f07eda1\" returns successfully" Jan 30 18:08:36.435965 containerd[1633]: time="2025-01-30T18:08:36.427900529Z" level=info msg="StopPodSandbox for \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\"" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.534 [WARNING][5422] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7", Pod:"coredns-7db6d8ff4d-2hwgm", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16b64e874ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.534 [INFO][5422] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.534 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" iface="eth0" netns="" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.534 [INFO][5422] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.534 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.575 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.575 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.575 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.584 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.584 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.588 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:36.595144 containerd[1633]: 2025-01-30 18:08:36.593 [INFO][5422] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.596458 containerd[1633]: time="2025-01-30T18:08:36.595229015Z" level=info msg="TearDown network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\" successfully" Jan 30 18:08:36.596458 containerd[1633]: time="2025-01-30T18:08:36.595280021Z" level=info msg="StopPodSandbox for \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\" returns successfully" Jan 30 18:08:36.597153 containerd[1633]: time="2025-01-30T18:08:36.597105347Z" level=info msg="RemovePodSandbox for \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\"" Jan 30 18:08:36.597224 containerd[1633]: time="2025-01-30T18:08:36.597162284Z" level=info msg="Forcibly stopping sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\"" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.675 [WARNING][5449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e38ccf5f-14a4-44d4-bb9c-2fd7d94168d8", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"c7ec2ff2ee3042b32fc2588748dfb706cdc9393a4a1e6c28eefbb8170b1cc3f7", Pod:"coredns-7db6d8ff4d-2hwgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16b64e874ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.676 [INFO][5449] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.678 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" iface="eth0" netns="" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.678 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.678 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.726 [INFO][5456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.726 [INFO][5456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.726 [INFO][5456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.734 [WARNING][5456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.734 [INFO][5456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" HandleID="k8s-pod-network.5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Workload="srv--xoz4v.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--2hwgm-eth0" Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.736 [INFO][5456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:36.743987 containerd[1633]: 2025-01-30 18:08:36.739 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7" Jan 30 18:08:36.751232 containerd[1633]: time="2025-01-30T18:08:36.751121420Z" level=info msg="TearDown network for sandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\" successfully" Jan 30 18:08:36.759461 containerd[1633]: time="2025-01-30T18:08:36.757602098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 18:08:36.759461 containerd[1633]: time="2025-01-30T18:08:36.757858341Z" level=info msg="RemovePodSandbox \"5b5926bae0ba52a01a3d372bf2b9a6c0b1f30f5197df2750cf8c4e4c586a36c7\" returns successfully" Jan 30 18:08:36.759461 containerd[1633]: time="2025-01-30T18:08:36.759134172Z" level=info msg="StopPodSandbox for \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\"" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.836 [WARNING][5475] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0", GenerateName:"calico-kube-controllers-6748df8946-", Namespace:"calico-system", SelfLink:"", UID:"89cbff21-1d7a-4590-8dec-898c81289a13", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6748df8946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4", Pod:"calico-kube-controllers-6748df8946-p6bqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bbf1d491b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.837 [INFO][5475] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.837 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" iface="eth0" netns="" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.837 [INFO][5475] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.837 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.887 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.888 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.888 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.897 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.897 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.900 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:36.905416 containerd[1633]: 2025-01-30 18:08:36.903 [INFO][5475] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:36.906755 containerd[1633]: time="2025-01-30T18:08:36.905677946Z" level=info msg="TearDown network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\" successfully" Jan 30 18:08:36.906755 containerd[1633]: time="2025-01-30T18:08:36.905727240Z" level=info msg="StopPodSandbox for \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\" returns successfully" Jan 30 18:08:36.907729 containerd[1633]: time="2025-01-30T18:08:36.907692919Z" level=info msg="RemovePodSandbox for \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\"" Jan 30 18:08:36.908785 containerd[1633]: time="2025-01-30T18:08:36.907743290Z" level=info msg="Forcibly stopping sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\"" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:36.965 [WARNING][5500] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0", GenerateName:"calico-kube-controllers-6748df8946-", Namespace:"calico-system", SelfLink:"", UID:"89cbff21-1d7a-4590-8dec-898c81289a13", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 18, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6748df8946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xoz4v.gb1.brightbox.com", ContainerID:"d04f5bacfaa13b59571d7f32cfa675c627125c5d0164177ba16b1e50822e30a4", Pod:"calico-kube-controllers-6748df8946-p6bqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bbf1d491b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:36.966 [INFO][5500] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:36.966 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" iface="eth0" netns="" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:36.966 [INFO][5500] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:36.966 [INFO][5500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:37.001 [INFO][5507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:37.001 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:37.001 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:37.011 [WARNING][5507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:37.012 [INFO][5507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" HandleID="k8s-pod-network.fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Workload="srv--xoz4v.gb1.brightbox.com-k8s-calico--kube--controllers--6748df8946--p6bqf-eth0" Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:37.015 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 18:08:37.019188 containerd[1633]: 2025-01-30 18:08:37.017 [INFO][5500] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99" Jan 30 18:08:37.021907 containerd[1633]: time="2025-01-30T18:08:37.020029338Z" level=info msg="TearDown network for sandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\" successfully" Jan 30 18:08:37.026642 containerd[1633]: time="2025-01-30T18:08:37.026286680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 18:08:37.026642 containerd[1633]: time="2025-01-30T18:08:37.026478457Z" level=info msg="RemovePodSandbox \"fc4005b90f7b1f714fafceaa95e5548e3924e37e278dc88e3605c5957c6d3b99\" returns successfully" Jan 30 18:08:38.055058 containerd[1633]: time="2025-01-30T18:08:38.053798523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:38.055058 containerd[1633]: time="2025-01-30T18:08:38.054930831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 18:08:38.055975 containerd[1633]: time="2025-01-30T18:08:38.055930877Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:38.063486 containerd[1633]: time="2025-01-30T18:08:38.063428768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 18:08:38.064649 containerd[1633]: time="2025-01-30T18:08:38.064611097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.973927557s" Jan 30 18:08:38.064738 containerd[1633]: time="2025-01-30T18:08:38.064668211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 18:08:38.070022 containerd[1633]: 
time="2025-01-30T18:08:38.069989653Z" level=info msg="CreateContainer within sandbox \"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 30 18:08:38.087829 containerd[1633]: time="2025-01-30T18:08:38.087780915Z" level=info msg="CreateContainer within sandbox \"24b66df4f769590870178ae4101c8ff8e2e523efabfb44b30797aff61ef3c02f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"760ce28ec6d3078d7c0453fd03af161cb959553a037b67d270bfa0d6035f8f3e\""
Jan 30 18:08:38.090721 containerd[1633]: time="2025-01-30T18:08:38.090687389Z" level=info msg="StartContainer for \"760ce28ec6d3078d7c0453fd03af161cb959553a037b67d270bfa0d6035f8f3e\""
Jan 30 18:08:38.187782 kubelet[2960]: I0130 18:08:38.187703 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c44fb797-kj4m5" podStartSLOduration=34.918115337 podStartE2EDuration="42.187677608s" podCreationTimestamp="2025-01-30 18:07:56 +0000 UTC" firstStartedPulling="2025-01-30 18:08:28.817604937 +0000 UTC m=+54.384210209" lastFinishedPulling="2025-01-30 18:08:36.087167208 +0000 UTC m=+61.653772480" observedRunningTime="2025-01-30 18:08:37.116634343 +0000 UTC m=+62.683239631" watchObservedRunningTime="2025-01-30 18:08:38.187677608 +0000 UTC m=+63.754282882"
Jan 30 18:08:38.242438 containerd[1633]: time="2025-01-30T18:08:38.242347939Z" level=info msg="StartContainer for \"760ce28ec6d3078d7c0453fd03af161cb959553a037b67d270bfa0d6035f8f3e\" returns successfully"
Jan 30 18:08:39.031625 kubelet[2960]: I0130 18:08:39.031328 2960 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 30 18:08:39.032998 kubelet[2960]: I0130 18:08:39.032933 2960 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 30 18:08:40.676044 systemd[1]: run-containerd-runc-k8s.io-2c791c7964ddf736d8c839d3db335c5833e886e9da2589971812f095d308f7a0-runc.iWcQ8D.mount: Deactivated successfully.
Jan 30 18:08:48.620689 systemd[1]: sshd@3-10.230.68.22:22-113.200.60.74:39046.service: Deactivated successfully.
Jan 30 18:08:49.284230 kubelet[2960]: I0130 18:08:49.284006 2960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tdn4g" podStartSLOduration=43.56327364 podStartE2EDuration="53.283153479s" podCreationTimestamp="2025-01-30 18:07:56 +0000 UTC" firstStartedPulling="2025-01-30 18:08:28.346384488 +0000 UTC m=+53.912989753" lastFinishedPulling="2025-01-30 18:08:38.066264314 +0000 UTC m=+63.632869592" observedRunningTime="2025-01-30 18:08:39.16451803 +0000 UTC m=+64.731123330" watchObservedRunningTime="2025-01-30 18:08:49.283153479 +0000 UTC m=+74.849758763"
Jan 30 18:08:53.880139 systemd[1]: Started sshd@20-10.230.68.22:22-52.140.61.101:35176.service - OpenSSH per-connection server daemon (52.140.61.101:35176).
Jan 30 18:08:54.653033 systemd[1]: Started sshd@21-10.230.68.22:22-139.178.89.65:55776.service - OpenSSH per-connection server daemon (139.178.89.65:55776).
Jan 30 18:08:54.785324 sshd[5616]: Invalid user gitadmin from 52.140.61.101 port 35176
Jan 30 18:08:54.949187 sshd[5616]: Received disconnect from 52.140.61.101 port 35176:11: Bye Bye [preauth]
Jan 30 18:08:54.949187 sshd[5616]: Disconnected from invalid user gitadmin 52.140.61.101 port 35176 [preauth]
Jan 30 18:08:54.952757 systemd[1]: sshd@20-10.230.68.22:22-52.140.61.101:35176.service: Deactivated successfully.
Jan 30 18:08:55.578494 sshd[5621]: Accepted publickey for core from 139.178.89.65 port 55776 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:08:55.586952 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:08:55.622401 systemd-logind[1602]: New session 12 of user core.
Jan 30 18:08:55.631773 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 18:08:56.930520 sshd[5621]: pam_unix(sshd:session): session closed for user core
Jan 30 18:08:56.950365 systemd[1]: sshd@21-10.230.68.22:22-139.178.89.65:55776.service: Deactivated successfully.
Jan 30 18:08:56.955299 systemd-logind[1602]: Session 12 logged out. Waiting for processes to exit.
Jan 30 18:08:56.955803 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 18:08:56.960747 systemd-logind[1602]: Removed session 12.
Jan 30 18:09:00.517539 kubelet[2960]: I0130 18:09:00.517134 2960 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 18:09:02.084252 systemd[1]: Started sshd@22-10.230.68.22:22-139.178.89.65:53764.service - OpenSSH per-connection server daemon (139.178.89.65:53764).
Jan 30 18:09:03.009046 sshd[5642]: Accepted publickey for core from 139.178.89.65 port 53764 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:03.017664 sshd[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:03.024972 systemd-logind[1602]: New session 13 of user core.
Jan 30 18:09:03.031329 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 18:09:03.770182 sshd[5642]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:03.774577 systemd-logind[1602]: Session 13 logged out. Waiting for processes to exit.
Jan 30 18:09:03.777245 systemd[1]: sshd@22-10.230.68.22:22-139.178.89.65:53764.service: Deactivated successfully.
Jan 30 18:09:03.783129 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 18:09:03.785584 systemd-logind[1602]: Removed session 13.
Jan 30 18:09:05.595188 systemd[1]: Started sshd@23-10.230.68.22:22-113.200.60.74:52289.service - OpenSSH per-connection server daemon (113.200.60.74:52289).
Jan 30 18:09:08.919244 systemd[1]: Started sshd@24-10.230.68.22:22-139.178.89.65:53774.service - OpenSSH per-connection server daemon (139.178.89.65:53774).
Jan 30 18:09:09.821093 sshd[5665]: Accepted publickey for core from 139.178.89.65 port 53774 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:09.823633 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:09.831731 systemd-logind[1602]: New session 14 of user core.
Jan 30 18:09:09.839464 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 18:09:10.553707 sshd[5665]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:10.559771 systemd[1]: sshd@24-10.230.68.22:22-139.178.89.65:53774.service: Deactivated successfully.
Jan 30 18:09:10.563322 systemd-logind[1602]: Session 14 logged out. Waiting for processes to exit.
Jan 30 18:09:10.564193 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 18:09:10.566200 systemd-logind[1602]: Removed session 14.
Jan 30 18:09:10.705507 systemd[1]: Started sshd@25-10.230.68.22:22-139.178.89.65:53778.service - OpenSSH per-connection server daemon (139.178.89.65:53778).
Jan 30 18:09:10.779687 systemd[1]: run-containerd-runc-k8s.io-2c791c7964ddf736d8c839d3db335c5833e886e9da2589971812f095d308f7a0-runc.ENbWCO.mount: Deactivated successfully.
Jan 30 18:09:11.619615 sshd[5686]: Accepted publickey for core from 139.178.89.65 port 53778 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:11.622826 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:11.630641 systemd-logind[1602]: New session 15 of user core.
Jan 30 18:09:11.635343 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 18:09:12.423543 sshd[5686]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:12.430481 systemd[1]: sshd@25-10.230.68.22:22-139.178.89.65:53778.service: Deactivated successfully.
Jan 30 18:09:12.435610 systemd-logind[1602]: Session 15 logged out. Waiting for processes to exit.
Jan 30 18:09:12.437073 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 18:09:12.439467 systemd-logind[1602]: Removed session 15.
Jan 30 18:09:12.571828 systemd[1]: Started sshd@26-10.230.68.22:22-139.178.89.65:42150.service - OpenSSH per-connection server daemon (139.178.89.65:42150).
Jan 30 18:09:13.475211 sshd[5711]: Accepted publickey for core from 139.178.89.65 port 42150 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:13.477986 sshd[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:13.485982 systemd-logind[1602]: New session 16 of user core.
Jan 30 18:09:13.496409 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 18:09:14.195822 sshd[5711]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:14.201373 systemd[1]: sshd@26-10.230.68.22:22-139.178.89.65:42150.service: Deactivated successfully.
Jan 30 18:09:14.205253 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 18:09:14.205270 systemd-logind[1602]: Session 16 logged out. Waiting for processes to exit.
Jan 30 18:09:14.208952 systemd-logind[1602]: Removed session 16.
Jan 30 18:09:19.181429 systemd[1]: run-containerd-runc-k8s.io-af1381c7981811448d96976b39b43039203344c808c9a1c28f785eb9cb8d5c78-runc.MC4NPK.mount: Deactivated successfully.
Jan 30 18:09:19.344192 systemd[1]: Started sshd@27-10.230.68.22:22-139.178.89.65:42166.service - OpenSSH per-connection server daemon (139.178.89.65:42166).
Jan 30 18:09:20.248118 sshd[5755]: Accepted publickey for core from 139.178.89.65 port 42166 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:20.250808 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:20.257611 systemd-logind[1602]: New session 17 of user core.
Jan 30 18:09:20.262361 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 18:09:20.975313 sshd[5755]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:20.981173 systemd[1]: sshd@27-10.230.68.22:22-139.178.89.65:42166.service: Deactivated successfully.
Jan 30 18:09:20.985170 systemd-logind[1602]: Session 17 logged out. Waiting for processes to exit.
Jan 30 18:09:20.985730 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 18:09:20.987939 systemd-logind[1602]: Removed session 17.
Jan 30 18:09:23.495340 systemd[1]: sshd@10-10.230.68.22:22-113.200.60.74:42777.service: Deactivated successfully.
Jan 30 18:09:26.124249 systemd[1]: Started sshd@28-10.230.68.22:22-139.178.89.65:45960.service - OpenSSH per-connection server daemon (139.178.89.65:45960).
Jan 30 18:09:27.036332 sshd[5793]: Accepted publickey for core from 139.178.89.65 port 45960 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:27.048006 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:27.057293 systemd-logind[1602]: New session 18 of user core.
Jan 30 18:09:27.063136 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 18:09:27.800759 sshd[5793]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:27.805994 systemd[1]: sshd@28-10.230.68.22:22-139.178.89.65:45960.service: Deactivated successfully.
Jan 30 18:09:27.813406 systemd-logind[1602]: Session 18 logged out. Waiting for processes to exit.
Jan 30 18:09:27.814562 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 18:09:27.817247 systemd-logind[1602]: Removed session 18.
Jan 30 18:09:29.694926 systemd[1]: Started sshd@29-10.230.68.22:22-136.232.203.134:42733.service - OpenSSH per-connection server daemon (136.232.203.134:42733).
Jan 30 18:09:31.059741 sshd[5807]: Invalid user ubuntu from 136.232.203.134 port 42733
Jan 30 18:09:31.304585 sshd[5807]: Received disconnect from 136.232.203.134 port 42733:11: Bye Bye [preauth]
Jan 30 18:09:31.304585 sshd[5807]: Disconnected from invalid user ubuntu 136.232.203.134 port 42733 [preauth]
Jan 30 18:09:31.308716 systemd[1]: sshd@29-10.230.68.22:22-136.232.203.134:42733.service: Deactivated successfully.
Jan 30 18:09:32.951361 systemd[1]: Started sshd@30-10.230.68.22:22-139.178.89.65:46864.service - OpenSSH per-connection server daemon (139.178.89.65:46864).
Jan 30 18:09:33.850176 sshd[5812]: Accepted publickey for core from 139.178.89.65 port 46864 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:33.853790 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:33.864552 systemd-logind[1602]: New session 19 of user core.
Jan 30 18:09:33.869790 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 18:09:34.602594 sshd[5812]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:34.613051 systemd[1]: sshd@30-10.230.68.22:22-139.178.89.65:46864.service: Deactivated successfully.
Jan 30 18:09:34.616914 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 18:09:34.617302 systemd-logind[1602]: Session 19 logged out. Waiting for processes to exit.
Jan 30 18:09:34.619925 systemd-logind[1602]: Removed session 19.
Jan 30 18:09:37.934601 systemd-journald[1172]: Under memory pressure, flushing caches.
Jan 30 18:09:37.922275 systemd-resolved[1511]: Under memory pressure, flushing caches.
Jan 30 18:09:37.922331 systemd-resolved[1511]: Flushed all caches.
Jan 30 18:09:38.017218 systemd[1]: Started sshd@31-10.230.68.22:22-113.200.60.74:55427.service - OpenSSH per-connection server daemon (113.200.60.74:55427).
Jan 30 18:09:39.752256 systemd[1]: Started sshd@32-10.230.68.22:22-139.178.89.65:46872.service - OpenSSH per-connection server daemon (139.178.89.65:46872).
Jan 30 18:09:40.645028 sshd[5831]: Accepted publickey for core from 139.178.89.65 port 46872 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:40.655442 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:40.672616 systemd-logind[1602]: New session 20 of user core.
Jan 30 18:09:40.681343 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 18:09:41.446523 sshd[5831]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:41.450489 systemd[1]: sshd@32-10.230.68.22:22-139.178.89.65:46872.service: Deactivated successfully.
Jan 30 18:09:41.456011 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 18:09:41.457976 systemd-logind[1602]: Session 20 logged out. Waiting for processes to exit.
Jan 30 18:09:41.462241 systemd-logind[1602]: Removed session 20.
Jan 30 18:09:41.595250 systemd[1]: Started sshd@33-10.230.68.22:22-139.178.89.65:35164.service - OpenSSH per-connection server daemon (139.178.89.65:35164).
Jan 30 18:09:42.495938 sshd[5864]: Accepted publickey for core from 139.178.89.65 port 35164 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:42.499568 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:42.508928 systemd-logind[1602]: New session 21 of user core.
Jan 30 18:09:42.521421 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 18:09:43.446848 sshd[5864]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:43.456968 systemd[1]: sshd@33-10.230.68.22:22-139.178.89.65:35164.service: Deactivated successfully.
Jan 30 18:09:43.461689 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 18:09:43.461940 systemd-logind[1602]: Session 21 logged out. Waiting for processes to exit.
Jan 30 18:09:43.464467 systemd-logind[1602]: Removed session 21.
Jan 30 18:09:43.592247 systemd[1]: Started sshd@34-10.230.68.22:22-139.178.89.65:35176.service - OpenSSH per-connection server daemon (139.178.89.65:35176).
Jan 30 18:09:44.487174 sshd[5876]: Accepted publickey for core from 139.178.89.65 port 35176 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:44.489953 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:44.498330 systemd-logind[1602]: New session 22 of user core.
Jan 30 18:09:44.503303 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 18:09:47.798540 sshd[5876]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:47.805285 systemd[1]: sshd@34-10.230.68.22:22-139.178.89.65:35176.service: Deactivated successfully.
Jan 30 18:09:47.815460 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 18:09:47.818567 systemd-logind[1602]: Session 22 logged out. Waiting for processes to exit.
Jan 30 18:09:47.820081 systemd-logind[1602]: Removed session 22.
Jan 30 18:09:47.949300 systemd[1]: Started sshd@35-10.230.68.22:22-139.178.89.65:35184.service - OpenSSH per-connection server daemon (139.178.89.65:35184).
Jan 30 18:09:48.843955 sshd[5901]: Accepted publickey for core from 139.178.89.65 port 35184 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:48.847493 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:48.855010 systemd-logind[1602]: New session 23 of user core.
Jan 30 18:09:48.863514 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 18:09:49.915479 sshd[5901]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:49.922138 systemd[1]: sshd@35-10.230.68.22:22-139.178.89.65:35184.service: Deactivated successfully.
Jan 30 18:09:49.926191 systemd-logind[1602]: Session 23 logged out. Waiting for processes to exit.
Jan 30 18:09:49.927692 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 18:09:49.929729 systemd-logind[1602]: Removed session 23.
Jan 30 18:09:50.065287 systemd[1]: Started sshd@36-10.230.68.22:22-139.178.89.65:35200.service - OpenSSH per-connection server daemon (139.178.89.65:35200).
Jan 30 18:09:50.954767 sshd[5935]: Accepted publickey for core from 139.178.89.65 port 35200 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:50.957789 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:50.964372 systemd-logind[1602]: New session 24 of user core.
Jan 30 18:09:50.972303 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 18:09:51.654827 sshd[5935]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:51.660697 systemd[1]: sshd@36-10.230.68.22:22-139.178.89.65:35200.service: Deactivated successfully.
Jan 30 18:09:51.665732 systemd-logind[1602]: Session 24 logged out. Waiting for processes to exit.
Jan 30 18:09:51.666017 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 18:09:51.668447 systemd-logind[1602]: Removed session 24.
Jan 30 18:09:56.807239 systemd[1]: Started sshd@37-10.230.68.22:22-139.178.89.65:60150.service - OpenSSH per-connection server daemon (139.178.89.65:60150).
Jan 30 18:09:57.707207 sshd[5955]: Accepted publickey for core from 139.178.89.65 port 60150 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:09:57.709822 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:09:57.716794 systemd-logind[1602]: New session 25 of user core.
Jan 30 18:09:57.724402 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 18:09:58.443473 sshd[5955]: pam_unix(sshd:session): session closed for user core
Jan 30 18:09:58.453917 systemd[1]: sshd@37-10.230.68.22:22-139.178.89.65:60150.service: Deactivated successfully.
Jan 30 18:09:58.462985 systemd-logind[1602]: Session 25 logged out. Waiting for processes to exit.
Jan 30 18:09:58.464927 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 18:09:58.468460 systemd-logind[1602]: Removed session 25.
Jan 30 18:09:59.574812 systemd[1]: sshd@18-10.230.68.22:22-113.200.60.74:46014.service: Deactivated successfully.
Jan 30 18:10:03.592192 systemd[1]: Started sshd@38-10.230.68.22:22-139.178.89.65:41914.service - OpenSSH per-connection server daemon (139.178.89.65:41914).
Jan 30 18:10:04.495136 sshd[5987]: Accepted publickey for core from 139.178.89.65 port 41914 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:10:04.497877 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:10:04.504111 systemd-logind[1602]: New session 26 of user core.
Jan 30 18:10:04.513388 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 18:10:05.227278 sshd[5987]: pam_unix(sshd:session): session closed for user core
Jan 30 18:10:05.233096 systemd[1]: sshd@38-10.230.68.22:22-139.178.89.65:41914.service: Deactivated successfully.
Jan 30 18:10:05.239280 systemd-logind[1602]: Session 26 logged out. Waiting for processes to exit.
Jan 30 18:10:05.240389 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 18:10:05.242553 systemd-logind[1602]: Removed session 26.
Jan 30 18:10:10.379431 systemd[1]: Started sshd@39-10.230.68.22:22-139.178.89.65:41920.service - OpenSSH per-connection server daemon (139.178.89.65:41920).
Jan 30 18:10:10.585592 systemd[1]: Started sshd@40-10.230.68.22:22-113.200.60.74:58565.service - OpenSSH per-connection server daemon (113.200.60.74:58565).
Jan 30 18:10:11.279043 sshd[6002]: Accepted publickey for core from 139.178.89.65 port 41920 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 18:10:11.281806 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 18:10:11.290067 systemd-logind[1602]: New session 27 of user core.
Jan 30 18:10:11.295433 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 18:10:11.988254 sshd[6002]: pam_unix(sshd:session): session closed for user core
Jan 30 18:10:11.992749 systemd[1]: sshd@39-10.230.68.22:22-139.178.89.65:41920.service: Deactivated successfully.
Jan 30 18:10:11.998765 systemd-logind[1602]: Session 27 logged out. Waiting for processes to exit.
Jan 30 18:10:11.999650 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 18:10:12.001593 systemd-logind[1602]: Removed session 27.