Oct 9 07:53:44.004744 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024 Oct 9 07:53:44.004789 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:53:44.004803 kernel: BIOS-provided physical RAM map: Oct 9 07:53:44.004817 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 9 07:53:44.004827 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 9 07:53:44.004837 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 9 07:53:44.004847 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Oct 9 07:53:44.004857 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Oct 9 07:53:44.004867 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 07:53:44.004876 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 9 07:53:44.004886 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 9 07:53:44.004896 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 9 07:53:44.004910 kernel: NX (Execute Disable) protection: active Oct 9 07:53:44.004920 kernel: APIC: Static calls initialized Oct 9 07:53:44.004932 kernel: SMBIOS 2.8 present. Oct 9 07:53:44.004943 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Oct 9 07:53:44.004954 kernel: Hypervisor detected: KVM Oct 9 07:53:44.004968 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 07:53:44.004979 kernel: kvm-clock: using sched offset of 4328860975 cycles Oct 9 07:53:44.004991 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 07:53:44.005002 kernel: tsc: Detected 2799.998 MHz processor Oct 9 07:53:44.005013 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 07:53:44.005024 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 07:53:44.005035 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Oct 9 07:53:44.005045 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 9 07:53:44.005056 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 07:53:44.005071 kernel: Using GB pages for direct mapping Oct 9 07:53:44.005082 kernel: ACPI: Early table checksum verification disabled Oct 9 07:53:44.005093 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Oct 9 07:53:44.005103 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:44.005114 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:44.005125 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:44.005135 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Oct 9 07:53:44.005146 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:44.005157 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 
07:53:44.005171 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:44.005182 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:44.005193 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Oct 9 07:53:44.005219 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Oct 9 07:53:44.005244 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Oct 9 07:53:44.005263 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Oct 9 07:53:44.005275 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Oct 9 07:53:44.005290 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Oct 9 07:53:44.005301 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Oct 9 07:53:44.005312 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 9 07:53:44.005324 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 9 07:53:44.005335 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Oct 9 07:53:44.005346 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Oct 9 07:53:44.005366 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Oct 9 07:53:44.005383 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Oct 9 07:53:44.005394 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Oct 9 07:53:44.005405 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Oct 9 07:53:44.005416 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Oct 9 07:53:44.005427 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Oct 9 07:53:44.005439 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Oct 9 07:53:44.005450 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Oct 9 07:53:44.005461 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Oct 9 07:53:44.005472 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Oct 9 07:53:44.005483 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Oct 9 07:53:44.005498 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Oct 9 07:53:44.005510 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 9 07:53:44.005521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 9 07:53:44.005532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Oct 9 07:53:44.005543 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Oct 9 07:53:44.005555 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Oct 9 07:53:44.005566 kernel: Zone ranges: Oct 9 07:53:44.005577 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 07:53:44.005589 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Oct 9 07:53:44.005604 kernel: Normal empty Oct 9 07:53:44.005615 kernel: Movable zone start for each node Oct 9 07:53:44.005627 kernel: Early memory node ranges Oct 9 07:53:44.005638 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 9 07:53:44.005649 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Oct 9 07:53:44.005660 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Oct 9 07:53:44.005679 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 07:53:44.005691 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 9 07:53:44.005702 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Oct 9 07:53:44.005713 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 07:53:44.005730 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 07:53:44.005742 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 9 07:53:44.005753 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 dfl dfl) Oct 9 07:53:44.005764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 07:53:44.005775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 07:53:44.005786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 07:53:44.005797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 07:53:44.005809 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 07:53:44.005820 kernel: TSC deadline timer available Oct 9 07:53:44.005839 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Oct 9 07:53:44.005850 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 07:53:44.005861 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 9 07:53:44.005873 kernel: Booting paravirtualized kernel on KVM Oct 9 07:53:44.005892 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 07:53:44.005903 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Oct 9 07:53:44.005914 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144 Oct 9 07:53:44.005926 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152 Oct 9 07:53:44.005936 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Oct 9 07:53:44.005954 kernel: kvm-guest: PV spinlocks enabled Oct 9 07:53:44.005965 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 07:53:44.005978 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:53:44.005990 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 07:53:44.006001 kernel: random: crng init done Oct 9 07:53:44.006012 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 07:53:44.006023 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 9 07:53:44.006034 kernel: Fallback order for Node 0: 0 Oct 9 07:53:44.006050 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Oct 9 07:53:44.006061 kernel: Policy zone: DMA32 Oct 9 07:53:44.006072 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 07:53:44.006083 kernel: software IO TLB: area num 16. Oct 9 07:53:44.006095 kernel: Memory: 1895388K/2096616K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 200968K reserved, 0K cma-reserved) Oct 9 07:53:44.006106 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Oct 9 07:53:44.006117 kernel: Kernel/User page tables isolation: enabled Oct 9 07:53:44.006128 kernel: ftrace: allocating 37706 entries in 148 pages Oct 9 07:53:44.006139 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 07:53:44.006155 kernel: Dynamic Preempt: voluntary Oct 9 07:53:44.006166 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 07:53:44.006178 kernel: rcu: RCU event tracing is enabled. Oct 9 07:53:44.006189 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Oct 9 07:53:44.006217 kernel: Trampoline variant of Tasks RCU enabled. 
Oct 9 07:53:44.006244 kernel: Rude variant of Tasks RCU enabled. Oct 9 07:53:44.006261 kernel: Tracing variant of Tasks RCU enabled. Oct 9 07:53:44.006273 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 07:53:44.006285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Oct 9 07:53:44.006296 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Oct 9 07:53:44.006308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 9 07:53:44.006320 kernel: Console: colour VGA+ 80x25 Oct 9 07:53:44.006336 kernel: printk: console [tty0] enabled Oct 9 07:53:44.006348 kernel: printk: console [ttyS0] enabled Oct 9 07:53:44.006369 kernel: ACPI: Core revision 20230628 Oct 9 07:53:44.006381 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 07:53:44.006393 kernel: x2apic enabled Oct 9 07:53:44.006410 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 07:53:44.006422 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Oct 9 07:53:44.006434 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Oct 9 07:53:44.006446 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 9 07:53:44.006458 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 9 07:53:44.006469 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 9 07:53:44.006481 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 07:53:44.006493 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 07:53:44.006504 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 07:53:44.006516 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 07:53:44.006532 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Oct 9 07:53:44.006544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 9 07:53:44.006556 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 9 07:53:44.006568 kernel: MDS: Mitigation: Clear CPU buffers Oct 9 07:53:44.006579 kernel: MMIO Stale Data: Unknown: No mitigations Oct 9 07:53:44.006591 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 9 07:53:44.006603 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 07:53:44.006615 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 07:53:44.006626 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 07:53:44.006638 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 07:53:44.006654 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 9 07:53:44.006666 kernel: Freeing SMP alternatives memory: 32K Oct 9 07:53:44.006678 kernel: pid_max: default: 32768 minimum: 301 Oct 9 07:53:44.006689 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Oct 9 07:53:44.006701 kernel: SELinux: Initializing. Oct 9 07:53:44.006713 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:53:44.006725 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:53:44.006737 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Oct 9 07:53:44.006748 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. 
Oct 9 07:53:44.006760 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:53:44.006772 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:53:44.006788 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Oct 9 07:53:44.006800 kernel: signal: max sigframe size: 1776 Oct 9 07:53:44.006812 kernel: rcu: Hierarchical SRCU implementation. Oct 9 07:53:44.006824 kernel: rcu: Max phase no-delay instances is 400. Oct 9 07:53:44.006836 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 9 07:53:44.006848 kernel: smp: Bringing up secondary CPUs ... Oct 9 07:53:44.006859 kernel: smpboot: x86: Booting SMP configuration: Oct 9 07:53:44.006871 kernel: .... node #0, CPUs: #1 Oct 9 07:53:44.006883 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 9 07:53:44.006899 kernel: smp: Brought up 1 node, 2 CPUs Oct 9 07:53:44.006911 kernel: smpboot: Max logical packages: 16 Oct 9 07:53:44.006923 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Oct 9 07:53:44.006935 kernel: devtmpfs: initialized Oct 9 07:53:44.006946 kernel: x86/mm: Memory block size: 128MB Oct 9 07:53:44.006958 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 07:53:44.006970 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Oct 9 07:53:44.006982 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 07:53:44.006994 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 07:53:44.007010 kernel: audit: initializing netlink subsys (disabled) Oct 9 07:53:44.007022 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 07:53:44.007034 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 07:53:44.007046 kernel: audit: type=2000 audit(1728460422.403:1): state=initialized audit_enabled=0 res=1 Oct 9 07:53:44.007057 kernel: cpuidle: using governor menu Oct 9 07:53:44.007069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 07:53:44.007081 kernel: dca service started, version 1.12.1 Oct 9 07:53:44.007092 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 9 07:53:44.007105 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 9 07:53:44.007121 kernel: PCI: Using configuration type 1 for base access Oct 9 07:53:44.007133 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 9 07:53:44.007145 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 07:53:44.007157 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 07:53:44.007169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 07:53:44.007180 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 07:53:44.007192 kernel: ACPI: Added _OSI(Module Device) Oct 9 07:53:44.007231 kernel: ACPI: Added _OSI(Processor Device) Oct 9 07:53:44.007245 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 07:53:44.007262 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 07:53:44.007274 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 07:53:44.007286 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 07:53:44.007298 kernel: ACPI: Interpreter enabled Oct 9 07:53:44.007310 kernel: ACPI: PM: (supports S0 S5) Oct 9 07:53:44.007322 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 07:53:44.007334 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 07:53:44.007346 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 07:53:44.007367 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 9 07:53:44.007385 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 07:53:44.007643 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 07:53:44.007815 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 9 07:53:44.007973 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 9 07:53:44.007991 kernel: PCI host bridge to bus 0000:00 Oct 9 07:53:44.008180 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 07:53:44.008364 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 07:53:44.008524 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 07:53:44.008670 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Oct 9 07:53:44.008816 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 9 07:53:44.008961 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Oct 9 07:53:44.009107 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 07:53:44.009303 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 9 07:53:44.009509 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Oct 9 07:53:44.009674 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Oct 9 07:53:44.009834 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Oct 9 07:53:44.009993 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Oct 9 07:53:44.010157 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 07:53:44.010375 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.010544 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Oct 9 07:53:44.010733 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.010895 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Oct 9 07:53:44.011066 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.011244 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Oct 9 07:53:44.011435 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.011598 kernel: pci 0000:00:02.3: 
reg 0x10: [mem 0xfea54000-0xfea54fff] Oct 9 07:53:44.011776 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.011937 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Oct 9 07:53:44.012115 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.012303 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Oct 9 07:53:44.012485 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.012642 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Oct 9 07:53:44.012816 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 9 07:53:44.012974 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Oct 9 07:53:44.013147 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 9 07:53:44.013335 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 9 07:53:44.013509 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Oct 9 07:53:44.013669 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Oct 9 07:53:44.013831 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Oct 9 07:53:44.014009 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 9 07:53:44.014172 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 9 07:53:44.014402 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Oct 9 07:53:44.014559 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Oct 9 07:53:44.014723 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 9 07:53:44.014879 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 9 07:53:44.015043 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 9 07:53:44.015221 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Oct 9 07:53:44.015393 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Oct 9 07:53:44.015563 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 9 07:53:44.015719 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 9 07:53:44.015903 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Oct 9 07:53:44.016067 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Oct 9 07:53:44.016314 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 9 07:53:44.016495 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 9 07:53:44.016651 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 9 07:53:44.016817 kernel: pci_bus 0000:02: extended config space not accessible Oct 9 07:53:44.016995 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Oct 9 07:53:44.017169 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Oct 9 07:53:44.017366 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 9 07:53:44.017534 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 9 07:53:44.017709 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 9 07:53:44.017873 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Oct 9 07:53:44.018035 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 9 07:53:44.018194 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 9 07:53:44.018432 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 9 07:53:44.018616 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 9 07:53:44.018781 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] 
Oct 9 07:53:44.018941 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 9 07:53:44.019098 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 9 07:53:44.019270 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 9 07:53:44.019453 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 9 07:53:44.019628 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 9 07:53:44.019786 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 9 07:53:44.019956 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 9 07:53:44.020116 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 9 07:53:44.020314 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 9 07:53:44.020508 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 9 07:53:44.020666 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 9 07:53:44.020823 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 9 07:53:44.020986 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 9 07:53:44.021145 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 9 07:53:44.021719 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 9 07:53:44.022108 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 9 07:53:44.022321 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 9 07:53:44.022495 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 9 07:53:44.022515 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 07:53:44.022527 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 07:53:44.022539 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 07:53:44.022551 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 07:53:44.022571 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 9 07:53:44.022583 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 9 07:53:44.022595 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 9 07:53:44.022608 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 9 07:53:44.022620 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 9 07:53:44.022632 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 9 07:53:44.022644 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 9 07:53:44.022656 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 9 07:53:44.022668 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 9 07:53:44.022685 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 9 07:53:44.022697 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 9 07:53:44.022709 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 9 07:53:44.022721 kernel: iommu: Default domain type: Translated Oct 9 07:53:44.022733 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 07:53:44.022744 kernel: PCI: Using ACPI for IRQ routing Oct 9 07:53:44.022756 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 07:53:44.022768 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 9 07:53:44.022780 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Oct 9 07:53:44.022940 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 9 07:53:44.023096 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 9 07:53:44.023273 kernel: pci 
0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 07:53:44.023292 kernel: vgaarb: loaded Oct 9 07:53:44.023305 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 07:53:44.023317 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 07:53:44.023329 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 07:53:44.023341 kernel: pnp: PnP ACPI init Oct 9 07:53:44.023513 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 9 07:53:44.023540 kernel: pnp: PnP ACPI: found 5 devices Oct 9 07:53:44.023553 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 07:53:44.023565 kernel: NET: Registered PF_INET protocol family Oct 9 07:53:44.023577 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 07:53:44.023590 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 9 07:53:44.023602 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 07:53:44.023614 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 9 07:53:44.023626 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 9 07:53:44.023643 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 9 07:53:44.023655 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:53:44.023667 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:53:44.023679 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 07:53:44.023691 kernel: NET: Registered PF_XDP protocol family Oct 9 07:53:44.023846 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Oct 9 07:53:44.024022 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Oct 9 07:53:44.024236 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Oct 9 07:53:44.024776 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Oct 9 07:53:44.024949 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 9 07:53:44.025108 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 9 07:53:44.025326 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 9 07:53:44.025501 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 9 07:53:44.025669 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Oct 9 07:53:44.025826 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Oct 9 07:53:44.025982 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Oct 9 07:53:44.026137 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Oct 9 07:53:44.026362 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Oct 9 07:53:44.026523 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Oct 9 07:53:44.026678 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Oct 9 07:53:44.026832 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Oct 9 07:53:44.027012 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 9 07:53:44.028288 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 9 07:53:44.028475 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 9 07:53:44.028636 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Oct 9 07:53:44.028806 
kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 9 07:53:44.028963 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 9 07:53:44.029120 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 9 07:53:44.030339 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Oct 9 07:53:44.030518 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 9 07:53:44.030687 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 9 07:53:44.030856 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 9 07:53:44.031047 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Oct 9 07:53:44.031204 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 9 07:53:44.032507 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 9 07:53:44.032697 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 9 07:53:44.032872 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Oct 9 07:53:44.033035 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 9 07:53:44.033866 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 9 07:53:44.034105 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 9 07:53:44.034299 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Oct 9 07:53:44.034476 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 9 07:53:44.034635 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 9 07:53:44.034797 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 9 07:53:44.034958 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Oct 9 07:53:44.035127 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 9 07:53:44.039378 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 9 07:53:44.039580 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 9 07:53:44.039761 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Oct 9 07:53:44.039989 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 9 07:53:44.040235 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 9 07:53:44.040448 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 9 07:53:44.040631 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Oct 9 07:53:44.040848 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 9 07:53:44.041047 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 9 07:53:44.041332 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 07:53:44.041501 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 07:53:44.041702 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 07:53:44.041860 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Oct 9 07:53:44.042044 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 9 07:53:44.046342 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Oct 9 07:53:44.046538 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Oct 9 07:53:44.046708 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Oct 9 07:53:44.046861 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 9 07:53:44.047059 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 9 07:53:44.048286 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Oct 9 
07:53:44.048515 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 9 07:53:44.048670 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 9 07:53:44.048851 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Oct 9 07:53:44.049009 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 9 07:53:44.049166 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 9 07:53:44.049403 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Oct 9 07:53:44.049567 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 9 07:53:44.049729 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 9 07:53:44.049916 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Oct 9 07:53:44.050069 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 9 07:53:44.052262 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 9 07:53:44.052459 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Oct 9 07:53:44.052620 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 9 07:53:44.052782 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 9 07:53:44.052949 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Oct 9 07:53:44.053102 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Oct 9 07:53:44.054303 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 9 07:53:44.054487 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Oct 9 07:53:44.054640 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 9 07:53:44.054826 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 9 07:53:44.054846 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 9 07:53:44.054859 kernel: PCI: CLS 0 bytes, default 64 Oct 9 07:53:44.054872 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 9 07:53:44.054885 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Oct 9 07:53:44.054898 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 9 07:53:44.054911 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Oct 9 07:53:44.054924 kernel: Initialise system trusted keyrings Oct 9 07:53:44.054937 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 9 07:53:44.054956 kernel: Key type asymmetric registered Oct 9 07:53:44.054969 kernel: Asymmetric key parser 'x509' registered Oct 9 07:53:44.054981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 07:53:44.054994 kernel: io scheduler mq-deadline registered Oct 9 07:53:44.055007 kernel: io scheduler kyber registered Oct 9 07:53:44.055019 kernel: io scheduler bfq registered Oct 9 07:53:44.055192 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 9 07:53:44.056414 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 9 07:53:44.056585 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.056763 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 9 07:53:44.056929 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 9 07:53:44.057094 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.058332 kernel: pcieport 
0000:00:02.2: PME: Signaling with IRQ 26 Oct 9 07:53:44.058547 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 9 07:53:44.058870 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.059047 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 9 07:53:44.060324 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 9 07:53:44.060511 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.060677 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 9 07:53:44.060838 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 9 07:53:44.060997 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.061169 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 9 07:53:44.063394 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 9 07:53:44.063566 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.063732 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 9 07:53:44.063895 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 9 07:53:44.064059 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.064273 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 9 07:53:44.064461 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 9 07:53:44.064621 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 07:53:44.064641 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 07:53:44.064655 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 9 07:53:44.064668 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 9 07:53:44.064688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 07:53:44.064701 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 07:53:44.064714 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 07:53:44.064727 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 07:53:44.064740 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 07:53:44.064923 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 9 07:53:44.064944 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 07:53:44.065089 kernel: rtc_cmos 00:03: registered as rtc0 Oct 9 07:53:44.066338 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T07:53:43 UTC (1728460423) Oct 9 07:53:44.066509 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 9 07:53:44.066530 kernel: intel_pstate: CPU model not supported Oct 9 07:53:44.066543 kernel: NET: Registered PF_INET6 protocol family Oct 9 07:53:44.066556 kernel: Segment Routing with IPv6 Oct 9 07:53:44.066569 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 07:53:44.066582 kernel: NET: Registered PF_PACKET protocol family Oct 9 07:53:44.066594 kernel: Key type dns_resolver registered Oct 9 07:53:44.066607 kernel: IPI shorthand broadcast: enabled Oct 9 07:53:44.066630 kernel: sched_clock: Marking stable (1146052405, 
234258842)->(1606193008, -225881761) Oct 9 07:53:44.066643 kernel: registered taskstats version 1 Oct 9 07:53:44.066656 kernel: Loading compiled-in X.509 certificates Oct 9 07:53:44.066669 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76' Oct 9 07:53:44.066681 kernel: Key type .fscrypt registered Oct 9 07:53:44.066693 kernel: Key type fscrypt-provisioning registered Oct 9 07:53:44.066706 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 9 07:53:44.066718 kernel: ima: Allocated hash algorithm: sha1 Oct 9 07:53:44.066731 kernel: ima: No architecture policies found Oct 9 07:53:44.066748 kernel: clk: Disabling unused clocks Oct 9 07:53:44.066761 kernel: Freeing unused kernel image (initmem) memory: 49452K Oct 9 07:53:44.066774 kernel: Write protecting the kernel read-only data: 36864k Oct 9 07:53:44.066787 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K Oct 9 07:53:44.066799 kernel: Run /init as init process Oct 9 07:53:44.066812 kernel: with arguments: Oct 9 07:53:44.066824 kernel: /init Oct 9 07:53:44.066836 kernel: with environment: Oct 9 07:53:44.066848 kernel: HOME=/ Oct 9 07:53:44.066865 kernel: TERM=linux Oct 9 07:53:44.066878 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 07:53:44.066900 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:53:44.066918 systemd[1]: Detected virtualization kvm. Oct 9 07:53:44.066931 systemd[1]: Detected architecture x86-64. Oct 9 07:53:44.066944 systemd[1]: Running in initrd. Oct 9 07:53:44.066957 systemd[1]: No hostname configured, using default hostname. Oct 9 07:53:44.066975 systemd[1]: Hostname set to . Oct 9 07:53:44.066989 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:53:44.067003 systemd[1]: Queued start job for default target initrd.target. Oct 9 07:53:44.067016 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:53:44.067030 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:53:44.067043 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 07:53:44.067057 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:53:44.067070 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 07:53:44.067088 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 07:53:44.067104 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 07:53:44.067117 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 07:53:44.067131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:53:44.067144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:53:44.067157 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:53:44.067171 systemd[1]: Reached target slices.target - Slice Units. 
Oct 9 07:53:44.067189 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:53:44.067279 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:53:44.067295 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:53:44.067308 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 07:53:44.067321 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 07:53:44.067346 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 07:53:44.067370 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:53:44.067384 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:53:44.067398 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:53:44.067418 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:53:44.067432 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 07:53:44.067445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:53:44.067459 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 07:53:44.067472 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 07:53:44.067485 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:53:44.067499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:53:44.067512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:53:44.067566 systemd-journald[199]: Collecting audit messages is disabled. Oct 9 07:53:44.067597 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 07:53:44.067612 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:53:44.067625 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 07:53:44.067645 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:53:44.067671 systemd-journald[199]: Journal started Oct 9 07:53:44.067695 systemd-journald[199]: Runtime Journal (/run/log/journal/7c1517caada34b59a5b1771ea7194aee) is 4.7M, max 38.0M, 33.2M free. Oct 9 07:53:44.024754 systemd-modules-load[200]: Inserted module 'overlay' Oct 9 07:53:44.121851 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 07:53:44.121886 kernel: Bridge firewalling registered Oct 9 07:53:44.087012 systemd-modules-load[200]: Inserted module 'br_netfilter' Oct 9 07:53:44.124959 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:53:44.126164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:53:44.127134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:44.128659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:53:44.137440 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:53:44.150473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:53:44.155463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:53:44.167389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Oct 9 07:53:44.168667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:53:44.179533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:53:44.184288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:53:44.192503 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 07:53:44.193495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:53:44.198460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:53:44.209698 dracut-cmdline[233]: dracut-dracut-053 Oct 9 07:53:44.214583 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd Oct 9 07:53:44.253190 systemd-resolved[236]: Positive Trust Anchors: Oct 9 07:53:44.253325 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:53:44.253377 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:53:44.258049 systemd-resolved[236]: Defaulting to hostname 'linux'. Oct 9 07:53:44.261291 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:53:44.262184 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:53:44.328242 kernel: SCSI subsystem initialized Oct 9 07:53:44.341236 kernel: Loading iSCSI transport class v2.0-870. Oct 9 07:53:44.356231 kernel: iscsi: registered transport (tcp) Oct 9 07:53:44.386315 kernel: iscsi: registered transport (qla4xxx) Oct 9 07:53:44.386397 kernel: QLogic iSCSI HBA Driver Oct 9 07:53:44.439537 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 07:53:44.452543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 07:53:44.498029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 07:53:44.498116 kernel: device-mapper: uevent: version 1.0.3 Oct 9 07:53:44.501273 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 07:53:44.553297 kernel: raid6: sse2x4 gen() 7921 MB/s Oct 9 07:53:44.571293 kernel: raid6: sse2x2 gen() 5496 MB/s Oct 9 07:53:44.589950 kernel: raid6: sse2x1 gen() 5453 MB/s Oct 9 07:53:44.590077 kernel: raid6: using algorithm sse2x4 gen() 7921 MB/s Oct 9 07:53:44.608881 kernel: raid6: .... 
xor() 5174 MB/s, rmw enabled Oct 9 07:53:44.608997 kernel: raid6: using ssse3x2 recovery algorithm Oct 9 07:53:44.639253 kernel: xor: automatically using best checksumming function avx Oct 9 07:53:44.847249 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 07:53:44.862425 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:53:44.869467 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:53:44.898131 systemd-udevd[419]: Using default interface naming scheme 'v255'. Oct 9 07:53:44.905062 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:53:44.914572 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 07:53:44.935520 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Oct 9 07:53:44.976094 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:53:44.981416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:53:45.094381 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:53:45.107008 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 07:53:45.135363 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 07:53:45.139544 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:53:45.141142 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:53:45.142729 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:53:45.148410 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 07:53:45.180693 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:53:45.228237 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Oct 9 07:53:45.230231 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 07:53:45.249241 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 9 07:53:45.253299 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:53:45.254813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:53:45.255750 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:53:45.257499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:53:45.257698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:45.258420 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:53:45.271086 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 07:53:45.271156 kernel: GPT:17805311 != 125829119 Oct 9 07:53:45.271196 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 07:53:45.271243 kernel: GPT:17805311 != 125829119 Oct 9 07:53:45.271260 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 07:53:45.271277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:45.269707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:53:45.300236 kernel: libata version 3.00 loaded. Oct 9 07:53:45.317235 kernel: AVX version of gcm_enc/dec engaged. 
Oct 9 07:53:45.327251 kernel: ahci 0000:00:1f.2: version 3.0 Oct 9 07:53:45.327574 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 9 07:53:45.328568 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 9 07:53:45.328775 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 9 07:53:45.331226 kernel: scsi host0: ahci Oct 9 07:53:45.332221 kernel: scsi host1: ahci Oct 9 07:53:45.332461 kernel: scsi host2: ahci Oct 9 07:53:45.332662 kernel: scsi host3: ahci Oct 9 07:53:45.332848 kernel: scsi host4: ahci Oct 9 07:53:45.336432 kernel: scsi host5: ahci Oct 9 07:53:45.336759 kernel: AES CTR mode by8 optimization enabled Oct 9 07:53:45.338224 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Oct 9 07:53:45.338256 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Oct 9 07:53:45.338274 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Oct 9 07:53:45.338290 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Oct 9 07:53:45.338317 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Oct 9 07:53:45.338345 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Oct 9 07:53:45.339224 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465) Oct 9 07:53:45.352230 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (474) Oct 9 07:53:45.366244 kernel: ACPI: bus type USB registered Oct 9 07:53:45.366342 kernel: usbcore: registered new interface driver usbfs Oct 9 07:53:45.366362 kernel: usbcore: registered new interface driver hub Oct 9 07:53:45.366379 kernel: usbcore: registered new device driver usb Oct 9 07:53:45.378679 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 07:53:45.466936 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 07:53:45.468246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:45.476567 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:53:45.482773 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 07:53:45.483665 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 07:53:45.491747 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 07:53:45.499515 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:53:45.505301 disk-uuid[553]: Primary Header is updated. Oct 9 07:53:45.505301 disk-uuid[553]: Secondary Entries is updated. Oct 9 07:53:45.505301 disk-uuid[553]: Secondary Header is updated. Oct 9 07:53:45.511493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:45.522264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:45.532031 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 07:53:45.648511 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 9 07:53:45.648589 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 9 07:53:45.650765 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 9 07:53:45.650798 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 9 07:53:45.652614 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 9 07:53:45.656002 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 9 07:53:45.739272 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 9 07:53:45.743276 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Oct 9 07:53:45.747293 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 9 07:53:45.752246 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 9 07:53:45.757944 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Oct 9 07:53:45.758224 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Oct 9 07:53:45.760493 kernel: hub 1-0:1.0: USB hub found Oct 9 07:53:45.760750 kernel: hub 1-0:1.0: 4 ports detected Oct 9 07:53:45.764508 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 9 07:53:45.764777 kernel: hub 2-0:1.0: USB hub found Oct 9 07:53:45.765230 kernel: hub 2-0:1.0: 4 ports detected Oct 9 07:53:46.004382 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 9 07:53:46.146237 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 9 07:53:46.153359 kernel: usbcore: registered new interface driver usbhid Oct 9 07:53:46.153429 kernel: usbhid: USB HID core driver Oct 9 07:53:46.161677 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 9 07:53:46.161748 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Oct 9 07:53:46.529256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:46.530549 disk-uuid[554]: The operation has completed successfully. Oct 9 07:53:46.581358 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 07:53:46.581509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 07:53:46.605509 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 07:53:46.621002 sh[582]: Success Oct 9 07:53:46.640285 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Oct 9 07:53:46.719861 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 07:53:46.722822 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 07:53:46.723746 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 07:53:46.757254 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a Oct 9 07:53:46.761792 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:46.761862 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 07:53:46.761882 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 07:53:46.764882 kernel: BTRFS info (device dm-0): using free space tree Oct 9 07:53:46.773891 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 07:53:46.775435 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Oct 9 07:53:46.780420 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 07:53:46.782386 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 07:53:46.805363 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:53:46.805449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:46.805469 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:53:46.810231 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:53:46.823150 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 07:53:46.827231 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:53:46.834946 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 07:53:46.842544 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 07:53:46.957754 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:53:46.970589 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:53:46.983561 ignition[677]: Ignition 2.18.0 Oct 9 07:53:46.983585 ignition[677]: Stage: fetch-offline Oct 9 07:53:46.985943 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:53:46.983704 ignition[677]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:46.983737 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:53:46.984004 ignition[677]: parsed url from cmdline: "" Oct 9 07:53:46.984011 ignition[677]: no config URL provided Oct 9 07:53:46.984021 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:53:46.984038 ignition[677]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:53:46.984055 ignition[677]: failed to fetch config: resource requires networking Oct 9 07:53:46.984586 ignition[677]: Ignition finished successfully Oct 9 07:53:47.003698 systemd-networkd[769]: lo: Link UP Oct 9 07:53:47.003714 systemd-networkd[769]: lo: Gained carrier Oct 9 07:53:47.006035 systemd-networkd[769]: Enumeration completed Oct 9 07:53:47.006283 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:53:47.006687 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:53:47.006693 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:53:47.008948 systemd-networkd[769]: eth0: Link UP Oct 9 07:53:47.008954 systemd-networkd[769]: eth0: Gained carrier Oct 9 07:53:47.008965 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:53:47.009519 systemd[1]: Reached target network.target - Network. Oct 9 07:53:47.018467 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 9 07:53:47.038922 ignition[773]: Ignition 2.18.0 Oct 9 07:53:47.038947 ignition[773]: Stage: fetch Oct 9 07:53:47.039282 ignition[773]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:47.039317 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:53:47.041372 systemd-networkd[769]: eth0: DHCPv4 address 10.230.72.98/30, gateway 10.230.72.97 acquired from 10.230.72.97 Oct 9 07:53:47.039463 ignition[773]: parsed url from cmdline: "" Oct 9 07:53:47.039470 ignition[773]: no config URL provided Oct 9 07:53:47.039480 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:53:47.039497 ignition[773]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:53:47.039664 ignition[773]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 9 07:53:47.039969 ignition[773]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 9 07:53:47.040043 ignition[773]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 9 07:53:47.040063 ignition[773]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 9 07:53:47.240354 ignition[773]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Oct 9 07:53:47.256311 ignition[773]: GET result: OK Oct 9 07:53:47.257051 ignition[773]: parsing config with SHA512: 3ccaea64d6d3fa7a2ad7ddb1bf0c41a1867fadd3f25b11a0c58bea0236a16199397bdbea8c5d6f6cdb9421583406dd0ca723ab2f5fdaf8e1ee7ed5fdbb4845db Oct 9 07:53:47.264065 unknown[773]: fetched base config from "system" Oct 9 07:53:47.264085 unknown[773]: fetched base config from "system" Oct 9 07:53:47.264596 ignition[773]: fetch: fetch complete Oct 9 07:53:47.264105 unknown[773]: fetched user config from "openstack" Oct 9 07:53:47.264605 ignition[773]: fetch: fetch passed Oct 9 07:53:47.266566 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 07:53:47.264674 ignition[773]: Ignition finished successfully Oct 9 07:53:47.275551 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 07:53:47.296606 ignition[781]: Ignition 2.18.0 Oct 9 07:53:47.296640 ignition[781]: Stage: kargs Oct 9 07:53:47.296947 ignition[781]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:47.296968 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:53:47.299761 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 07:53:47.298219 ignition[781]: kargs: kargs passed Oct 9 07:53:47.298306 ignition[781]: Ignition finished successfully Oct 9 07:53:47.309574 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 07:53:47.327725 ignition[788]: Ignition 2.18.0 Oct 9 07:53:47.327748 ignition[788]: Stage: disks Oct 9 07:53:47.328005 ignition[788]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:47.330648 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 07:53:47.328027 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:53:47.329155 ignition[788]: disks: disks passed Oct 9 07:53:47.332640 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 07:53:47.329247 ignition[788]: Ignition finished successfully Oct 9 07:53:47.333919 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:53:47.335190 systemd[1]: Reached target local-fs.target - Local File Systems. 
Oct 9 07:53:47.336677 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:53:47.337887 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:53:47.345425 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 07:53:47.365391 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 9 07:53:47.369273 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 07:53:47.374345 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 07:53:47.508348 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none. Oct 9 07:53:47.509559 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 07:53:47.511679 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 07:53:47.530531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:53:47.533899 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 07:53:47.535504 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 07:53:47.537561 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Oct 9 07:53:47.538689 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 07:53:47.538734 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:53:47.548245 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Oct 9 07:53:47.554220 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:53:47.554287 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:47.554310 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:53:47.558785 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 07:53:47.569381 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:53:47.565438 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 07:53:47.570579 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:53:47.657929 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 07:53:47.664846 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Oct 9 07:53:47.673690 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 07:53:47.679453 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 07:53:47.785920 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 07:53:47.791388 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 07:53:47.794654 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 07:53:47.809652 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 07:53:47.812508 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:53:47.836469 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 9 07:53:47.842788 ignition[926]: INFO : Ignition 2.18.0 Oct 9 07:53:47.842788 ignition[926]: INFO : Stage: mount Oct 9 07:53:47.845348 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:47.845348 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:53:47.845348 ignition[926]: INFO : mount: mount passed Oct 9 07:53:47.845348 ignition[926]: INFO : Ignition finished successfully Oct 9 07:53:47.845682 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 07:53:48.168614 systemd-networkd[769]: eth0: Gained IPv6LL Oct 9 07:53:49.675996 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9218:24:19ff:fee6:4862/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9218:24:19ff:fee6:4862/64 assigned by NDisc. Oct 9 07:53:49.676014 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 9 07:53:54.722682 coreos-metadata[808]: Oct 09 07:53:54.722 WARN failed to locate config-drive, using the metadata service API instead Oct 9 07:53:54.744533 coreos-metadata[808]: Oct 09 07:53:54.744 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 9 07:53:54.760137 coreos-metadata[808]: Oct 09 07:53:54.760 INFO Fetch successful Oct 9 07:53:54.761228 coreos-metadata[808]: Oct 09 07:53:54.760 INFO wrote hostname srv-9xk3k.gb1.brightbox.com to /sysroot/etc/hostname Oct 9 07:53:54.762814 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 9 07:53:54.763008 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Oct 9 07:53:54.771391 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 07:53:54.803497 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:53:54.815458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Oct 9 07:53:54.815531 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce Oct 9 07:53:54.818012 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:54.819538 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:53:54.825268 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:53:54.827331 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 07:53:54.859239 ignition[961]: INFO : Ignition 2.18.0 Oct 9 07:53:54.859239 ignition[961]: INFO : Stage: files Oct 9 07:53:54.859239 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:54.859239 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:53:54.862956 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:53:54.862956 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:53:54.862956 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:53:54.866987 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:53:54.869017 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:53:54.869017 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:53:54.868308 unknown[961]: wrote ssh authorized keys file for user: core Oct 9 07:53:54.873164 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 07:53:54.873164 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 07:53:54.873164 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:53:54.873164 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:53:55.127781 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 07:53:55.908672 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:53:55.916737 ignition[961]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:53:55.916737 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 9 07:53:56.461263 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 9 07:53:57.892246 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 07:53:57.892246 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service" Oct 9 07:53:57.900273 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Oct 9 07:53:57.901778 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 07:53:57.901778 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:53:57.901778 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:53:57.901778 ignition[961]: INFO : files: files passed Oct 9 07:53:57.901778 ignition[961]: INFO : Ignition finished successfully Oct 9 07:53:57.903013 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 07:53:57.913627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 07:53:57.926490 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 07:53:57.933538 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 07:53:57.934668 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 9 07:53:57.942245 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:53:57.942245 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:53:57.945496 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:53:57.946829 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:53:57.948471 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 07:53:57.960554 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 07:53:57.992039 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 07:53:58.001305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 07:53:58.006246 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 07:53:58.007153 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 07:53:58.009374 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 07:53:58.014513 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 07:53:58.047436 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:53:58.053422 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 07:53:58.080761 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:53:58.081699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:53:58.082603 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 07:53:58.084113 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 07:53:58.084417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:53:58.086294 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 07:53:58.087193 systemd[1]: Stopped target basic.target - Basic System. Oct 9 07:53:58.088574 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 07:53:58.090123 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:53:58.091499 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 07:53:58.092815 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 07:53:58.094253 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:53:58.095888 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 07:53:58.097464 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 07:53:58.098829 systemd[1]: Stopped target swap.target - Swaps. Oct 9 07:53:58.100333 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 07:53:58.100536 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:53:58.102329 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:53:58.103287 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:53:58.104860 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 07:53:58.105089 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 9 07:53:58.106388 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 07:53:58.106630 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 07:53:58.108375 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 07:53:58.108545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:53:58.109474 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 07:53:58.109640 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 07:53:58.118505 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 07:53:58.130585 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 07:53:58.132405 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 07:53:58.132614 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:53:58.136626 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 07:53:58.137825 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:53:58.144848 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 07:53:58.145084 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 07:53:58.151857 ignition[1014]: INFO : Ignition 2.18.0 Oct 9 07:53:58.151857 ignition[1014]: INFO : Stage: umount Oct 9 07:53:58.153671 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:58.153671 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 9 07:53:58.155450 ignition[1014]: INFO : umount: umount passed Oct 9 07:53:58.156432 ignition[1014]: INFO : Ignition finished successfully Oct 9 07:53:58.159053 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 07:53:58.159278 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 07:53:58.160277 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 07:53:58.160362 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 07:53:58.161102 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 07:53:58.161180 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 07:53:58.161910 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 9 07:53:58.161981 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 9 07:53:58.163463 systemd[1]: Stopped target network.target - Network. Oct 9 07:53:58.165612 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 07:53:58.165708 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:53:58.167703 systemd[1]: Stopped target paths.target - Path Units. Oct 9 07:53:58.168322 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 07:53:58.174373 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:53:58.175507 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 07:53:58.176778 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 07:53:58.178336 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 07:53:58.178440 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:53:58.179858 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 07:53:58.179916 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Oct 9 07:53:58.181143 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 07:53:58.181243 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 07:53:58.182496 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 07:53:58.182572 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 07:53:58.184264 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 07:53:58.187031 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 07:53:58.190096 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 07:53:58.190458 systemd-networkd[769]: eth0: DHCPv6 lease lost Oct 9 07:53:58.192000 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 07:53:58.193637 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 07:53:58.195485 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 07:53:58.195643 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 07:53:58.198033 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 07:53:58.198161 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:53:58.200952 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 07:53:58.201035 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 07:53:58.212398 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 07:53:58.215286 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 07:53:58.215373 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:53:58.217010 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:53:58.219363 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 07:53:58.219520 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 07:53:58.227824 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 07:53:58.228090 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:53:58.234352 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 07:53:58.235365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 07:53:58.236982 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 07:53:58.237150 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 07:53:58.238729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 07:53:58.238788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:53:58.240350 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 07:53:58.240436 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:53:58.242311 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 07:53:58.242374 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 07:53:58.243589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:53:58.243666 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:53:58.258506 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 07:53:58.261582 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 07:53:58.261674 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Oct 9 07:53:58.263217 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 07:53:58.263291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 07:53:58.264520 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 07:53:58.264585 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:53:58.266740 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 9 07:53:58.266806 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:53:58.269359 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 07:53:58.269422 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:53:58.273154 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 07:53:58.273307 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:53:58.274881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:53:58.274948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:58.277263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 07:53:58.277431 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 07:53:58.279661 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 07:53:58.286484 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 07:53:58.298557 systemd[1]: Switching root. Oct 9 07:53:58.325248 systemd-journald[199]: Journal stopped Oct 9 07:53:59.760928 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Oct 9 07:53:59.761063 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 07:53:59.761096 kernel: SELinux: policy capability open_perms=1 Oct 9 07:53:59.761123 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 07:53:59.761141 kernel: SELinux: policy capability always_check_network=0 Oct 9 07:53:59.761159 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 07:53:59.761184 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 07:53:59.761221 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 07:53:59.761242 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 07:53:59.761266 kernel: audit: type=1403 audit(1728460438.611:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 07:53:59.761304 systemd[1]: Successfully loaded SELinux policy in 49.322ms. Oct 9 07:53:59.761348 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.509ms. Oct 9 07:53:59.761369 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:53:59.761390 systemd[1]: Detected virtualization kvm. Oct 9 07:53:59.761409 systemd[1]: Detected architecture x86-64. Oct 9 07:53:59.761435 systemd[1]: Detected first boot. Oct 9 07:53:59.761455 systemd[1]: Hostname set to <srv-9xk3k.gb1.brightbox.com>. Oct 9 07:53:59.761473 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:53:59.761505 zram_generator::config[1077]: No configuration found. Oct 9 07:53:59.761533 systemd[1]: Populated /etc with preset unit settings. 
Oct 9 07:53:59.761554 systemd[1]: Queued start job for default target multi-user.target. Oct 9 07:53:59.761579 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 07:53:59.761601 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 07:53:59.761621 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 07:53:59.761639 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 07:53:59.761658 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 07:53:59.761689 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 07:53:59.761710 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 07:53:59.761730 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 07:53:59.761749 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 07:53:59.761768 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:53:59.761787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:53:59.761807 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 07:53:59.761826 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 07:53:59.761846 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 07:53:59.761882 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:53:59.761903 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 9 07:53:59.761923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:53:59.761942 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 07:53:59.761961 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:53:59.761982 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:53:59.762020 systemd[1]: Reached target slices.target - Slice Units. Oct 9 07:53:59.762042 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:53:59.762075 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 07:53:59.762108 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 07:53:59.762140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 07:53:59.762166 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 07:53:59.764238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:53:59.764277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:53:59.764298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:53:59.764318 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 07:53:59.764338 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 07:53:59.764357 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 07:53:59.764376 systemd[1]: Mounting media.mount - External Media Directory... 
Oct 9 07:53:59.764397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:59.764415 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 07:53:59.764452 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 07:53:59.764473 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 07:53:59.764494 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 07:53:59.764514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:53:59.764534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:53:59.764553 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 07:53:59.764572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:53:59.764591 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:53:59.764611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:53:59.764643 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 07:53:59.764664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:53:59.764684 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 07:53:59.764703 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 9 07:53:59.764730 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Oct 9 07:53:59.764750 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:53:59.764770 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:53:59.764788 kernel: fuse: init (API version 7.39) Oct 9 07:53:59.764808 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 07:53:59.764850 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 07:53:59.764877 kernel: loop: module loaded Oct 9 07:53:59.764897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:53:59.764917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:59.764941 kernel: ACPI: bus type drm_connector registered Oct 9 07:53:59.764959 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 07:53:59.764984 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 07:53:59.765022 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 07:53:59.765055 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 07:53:59.765076 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 07:53:59.765095 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 07:53:59.765156 systemd-journald[1177]: Collecting audit messages is disabled. Oct 9 07:53:59.765221 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:53:59.765248 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Oct 9 07:53:59.765269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 07:53:59.765288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:53:59.765323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:53:59.765343 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:53:59.765362 systemd-journald[1177]: Journal started Oct 9 07:53:59.765408 systemd-journald[1177]: Runtime Journal (/run/log/journal/7c1517caada34b59a5b1771ea7194aee) is 4.7M, max 38.0M, 33.2M free. Oct 9 07:53:59.768251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:53:59.768320 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:53:59.773575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:53:59.773903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:53:59.776505 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 07:53:59.776737 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 07:53:59.777956 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:53:59.778177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:53:59.782776 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:53:59.784929 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 07:53:59.787705 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 07:53:59.801798 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 07:53:59.810105 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 07:53:59.817443 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 07:53:59.826838 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 07:53:59.827682 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 07:53:59.832156 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 07:53:59.841466 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 07:53:59.845315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:53:59.857281 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 07:53:59.860338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:53:59.870466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:53:59.873405 systemd-journald[1177]: Time spent on flushing to /var/log/journal/7c1517caada34b59a5b1771ea7194aee is 34.530ms for 1126 entries. Oct 9 07:53:59.873405 systemd-journald[1177]: System Journal (/var/log/journal/7c1517caada34b59a5b1771ea7194aee) is 8.0M, max 584.8M, 576.8M free. Oct 9 07:53:59.928381 systemd-journald[1177]: Received client request to flush runtime journal. Oct 9 07:53:59.878895 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:53:59.884814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Oct 9 07:53:59.886410 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 07:53:59.912523 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 07:53:59.914623 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 07:53:59.933689 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 07:53:59.956283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:53:59.972037 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Oct 9 07:53:59.973260 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Oct 9 07:53:59.989774 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:54:00.001614 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 07:54:00.003064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:54:00.016438 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 07:54:00.059697 udevadm[1247]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 9 07:54:00.075966 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 07:54:00.090678 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:54:00.115130 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Oct 9 07:54:00.115626 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Oct 9 07:54:00.123046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:54:00.655668 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 07:54:00.664531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:54:00.699336 systemd-udevd[1261]: Using default interface naming scheme 'v255'. Oct 9 07:54:00.727187 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:54:00.738514 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:54:00.759417 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 07:54:00.836641 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Oct 9 07:54:00.851876 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 07:54:00.882296 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1269) Oct 9 07:54:00.940241 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1277) Oct 9 07:54:01.008627 systemd-networkd[1270]: lo: Link UP Oct 9 07:54:01.008647 systemd-networkd[1270]: lo: Gained carrier Oct 9 07:54:01.011536 systemd-networkd[1270]: Enumeration completed Oct 9 07:54:01.012150 systemd-networkd[1270]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:54:01.012163 systemd-networkd[1270]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 9 07:54:01.018073 systemd-networkd[1270]: eth0: Link UP Oct 9 07:54:01.018087 systemd-networkd[1270]: eth0: Gained carrier Oct 9 07:54:01.018106 systemd-networkd[1270]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:54:01.020673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:54:01.021693 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:54:01.028386 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 07:54:01.040250 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 9 07:54:01.048301 systemd-networkd[1270]: eth0: DHCPv4 address 10.230.72.98/30, gateway 10.230.72.97 acquired from 10.230.72.97 Oct 9 07:54:01.050229 kernel: ACPI: button: Power Button [PWRF] Oct 9 07:54:01.065257 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 07:54:01.127833 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 9 07:54:01.132238 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 9 07:54:01.137988 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 9 07:54:01.138343 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 9 07:54:01.169620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:54:01.378479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:54:01.382307 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 07:54:01.400518 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 07:54:01.421303 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:54:01.458868 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 07:54:01.459978 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:54:01.474455 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 07:54:01.481058 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:54:01.511636 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 07:54:01.513333 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:54:01.514269 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 07:54:01.514419 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:54:01.515413 systemd[1]: Reached target machines.target - Containers. Oct 9 07:54:01.517859 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 07:54:01.525488 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 07:54:01.530401 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 07:54:01.532494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:54:01.535049 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Oct 9 07:54:01.552603 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 07:54:01.557398 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 07:54:01.562453 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 07:54:01.588502 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 07:54:01.589519 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 07:54:01.599479 kernel: loop0: detected capacity change from 0 to 139904 Oct 9 07:54:01.594661 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 07:54:01.607273 kernel: block loop0: the capability attribute has been deprecated. Oct 9 07:54:01.652243 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 07:54:01.688238 kernel: loop1: detected capacity change from 0 to 80568 Oct 9 07:54:01.726238 kernel: loop2: detected capacity change from 0 to 8 Oct 9 07:54:01.764239 kernel: loop3: detected capacity change from 0 to 211296 Oct 9 07:54:01.808489 kernel: loop4: detected capacity change from 0 to 139904 Oct 9 07:54:01.840271 kernel: loop5: detected capacity change from 0 to 80568 Oct 9 07:54:01.859255 kernel: loop6: detected capacity change from 0 to 8 Oct 9 07:54:01.862277 kernel: loop7: detected capacity change from 0 to 211296 Oct 9 07:54:01.879416 (sd-merge)[1329]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Oct 9 07:54:01.881052 (sd-merge)[1329]: Merged extensions into '/usr'. Oct 9 07:54:01.888838 systemd[1]: Reloading requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 07:54:01.888879 systemd[1]: Reloading... Oct 9 07:54:01.987487 zram_generator::config[1355]: No configuration found. Oct 9 07:54:02.167354 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 07:54:02.198579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:54:02.250791 systemd-networkd[1270]: eth0: Gained IPv6LL Oct 9 07:54:02.282549 systemd[1]: Reloading finished in 392 ms. Oct 9 07:54:02.311417 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:54:02.313014 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 07:54:02.314276 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 07:54:02.324447 systemd[1]: Starting ensure-sysext.service... Oct 9 07:54:02.327525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 9 07:54:02.339232 systemd[1]: Reloading requested from client PID 1420 ('systemctl') (unit ensure-sysext.service)... Oct 9 07:54:02.339433 systemd[1]: Reloading... Oct 9 07:54:02.360832 systemd-tmpfiles[1421]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 07:54:02.361466 systemd-tmpfiles[1421]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 07:54:02.363237 systemd-tmpfiles[1421]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Oct 9 07:54:02.363641 systemd-tmpfiles[1421]: ACLs are not supported, ignoring. Oct 9 07:54:02.363748 systemd-tmpfiles[1421]: ACLs are not supported, ignoring. Oct 9 07:54:02.367897 systemd-tmpfiles[1421]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:54:02.367917 systemd-tmpfiles[1421]: Skipping /boot Oct 9 07:54:02.387400 systemd-tmpfiles[1421]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:54:02.387422 systemd-tmpfiles[1421]: Skipping /boot Oct 9 07:54:02.420253 zram_generator::config[1446]: No configuration found. Oct 9 07:54:02.613093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:54:02.693490 systemd[1]: Reloading finished in 353 ms. Oct 9 07:54:02.714033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 9 07:54:02.731417 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:54:02.740466 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 07:54:02.744412 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 07:54:02.756438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:54:02.770488 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 07:54:02.781710 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:54:02.782008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:54:02.785502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:54:02.797859 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:54:02.825822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:54:02.829984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:54:02.830239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:54:02.850614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:54:02.851482 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:54:02.857421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:54:02.864621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:54:02.873808 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:54:02.874698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:54:02.877992 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 07:54:02.892614 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 07:54:02.900567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:54:02.900911 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 9 07:54:02.904247 augenrules[1544]: No rules Oct 9 07:54:02.908614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:54:02.917585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:54:02.926129 systemd-resolved[1516]: Positive Trust Anchors: Oct 9 07:54:02.926145 systemd-resolved[1516]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:54:02.926188 systemd-resolved[1516]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 9 07:54:02.931612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:54:02.938563 systemd-resolved[1516]: Using system hostname 'srv-9xk3k.gb1.brightbox.com'. Oct 9 07:54:02.945612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:54:02.947413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:54:02.957541 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 07:54:02.961871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:54:02.965604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:54:02.968317 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:54:02.969717 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 07:54:02.971062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:54:02.971319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:54:02.972924 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:54:02.973157 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:54:02.974545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:54:02.974765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:54:02.976107 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:54:02.978644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:54:02.981145 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 07:54:02.986364 systemd[1]: Finished ensure-sysext.service. Oct 9 07:54:02.992638 systemd[1]: Reached target network.target - Network. Oct 9 07:54:02.993319 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:54:02.994023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:54:02.994876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:54:02.994980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 9 07:54:03.000463 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 07:54:03.001228 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:54:03.053340 systemd-networkd[1270]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9218:24:19ff:fee6:4862/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9218:24:19ff:fee6:4862/64 assigned by NDisc. Oct 9 07:54:03.053355 systemd-networkd[1270]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 9 07:54:03.074595 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 07:54:03.076015 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:54:03.076821 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 07:54:03.077618 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 07:54:03.078396 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 07:54:03.079235 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 07:54:03.079276 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:54:03.079887 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 07:54:03.080775 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 07:54:03.081639 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 07:54:03.082396 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:54:03.084155 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 07:54:03.086969 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 07:54:03.090029 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 07:54:03.091364 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 07:54:03.092129 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:54:03.092835 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:54:03.093738 systemd[1]: System is tainted: cgroupsv1 Oct 9 07:54:03.093799 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:54:03.093833 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:54:03.096811 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 07:54:03.106186 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 07:54:03.116462 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 07:54:03.125298 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 07:54:03.127354 systemd-timesyncd[1573]: Contacted time server 176.58.109.199:123 (0.flatcar.pool.ntp.org). Oct 9 07:54:03.127444 systemd-timesyncd[1573]: Initial clock synchronization to Wed 2024-10-09 07:54:03.524801 UTC. Oct 9 07:54:03.137640 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Oct 9 07:54:03.139084 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 07:54:03.146806 jq[1581]: false Oct 9 07:54:03.147168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:03.155505 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 07:54:03.160112 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 07:54:03.172905 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 07:54:03.175560 dbus-daemon[1580]: [system] SELinux support is enabled Oct 9 07:54:03.182466 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 07:54:03.185592 dbus-daemon[1580]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1270 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 9 07:54:03.190050 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 07:54:03.209319 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 07:54:03.213407 extend-filesystems[1584]: Found loop4 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found loop5 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found loop6 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found loop7 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda1 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda2 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda3 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found usr Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda4 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda6 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda7 Oct 9 07:54:03.213407 extend-filesystems[1584]: Found vda9 Oct 9 07:54:03.213407 extend-filesystems[1584]: Checking size of /dev/vda9 Oct 9 07:54:03.327188 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Oct 9 07:54:03.327272 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1267) Oct 9 07:54:03.212902 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 07:54:03.327408 extend-filesystems[1584]: Resized partition /dev/vda9 Oct 9 07:54:03.229458 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 07:54:03.345886 extend-filesystems[1613]: resize2fs 1.47.0 (5-Feb-2023) Oct 9 07:54:03.259699 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 07:54:03.266951 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 07:54:03.364884 jq[1611]: true Oct 9 07:54:03.281315 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 07:54:03.281642 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 07:54:03.309344 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 07:54:03.309701 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 07:54:03.337490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Oct 9 07:54:03.351978 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 07:54:03.352429 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 07:54:03.373433 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 07:54:03.387274 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 07:54:03.396406 update_engine[1603]: I1009 07:54:03.393401 1603 main.cc:92] Flatcar Update Engine starting Oct 9 07:54:03.389796 dbus-daemon[1580]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 9 07:54:03.387321 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 07:54:03.389306 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 07:54:03.389337 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 07:54:03.400062 systemd-logind[1599]: Watching system buttons on /dev/input/event2 (Power Button) Oct 9 07:54:03.402399 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 9 07:54:03.405105 systemd-logind[1599]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 07:54:03.411258 jq[1625]: true Oct 9 07:54:03.411718 systemd-logind[1599]: New seat seat0. Oct 9 07:54:03.415814 tar[1619]: linux-amd64/helm Oct 9 07:54:03.427779 update_engine[1603]: I1009 07:54:03.426772 1603 update_check_scheduler.cc:74] Next update check in 2m7s Oct 9 07:54:03.427529 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 07:54:03.437223 systemd[1]: Started update-engine.service - Update Engine. Oct 9 07:54:03.442123 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 07:54:03.445722 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 07:54:03.629974 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 07:54:03.709606 bash[1660]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:54:03.714967 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 07:54:03.740626 systemd[1]: Starting sshkeys.service... Oct 9 07:54:03.788376 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 07:54:03.790579 dbus-daemon[1580]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 9 07:54:03.796627 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 9 07:54:03.798691 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 9 07:54:03.807271 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 9 07:54:03.803836 dbus-daemon[1580]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1638 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 9 07:54:03.815842 systemd[1]: Starting polkit.service - Authorization Manager... 
Oct 9 07:54:03.829665 extend-filesystems[1613]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 07:54:03.829665 extend-filesystems[1613]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 9 07:54:03.829665 extend-filesystems[1613]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 9 07:54:03.842834 extend-filesystems[1584]: Resized filesystem in /dev/vda9 Oct 9 07:54:03.831255 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 07:54:03.831791 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 07:54:03.871164 polkitd[1671]: Started polkitd version 121 Oct 9 07:54:03.892152 polkitd[1671]: Loading rules from directory /etc/polkit-1/rules.d Oct 9 07:54:03.895358 polkitd[1671]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 9 07:54:03.897049 polkitd[1671]: Finished loading, compiling and executing 2 rules Oct 9 07:54:03.899780 dbus-daemon[1580]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 9 07:54:03.900388 systemd[1]: Started polkit.service - Authorization Manager. Oct 9 07:54:03.901174 polkitd[1671]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 9 07:54:03.942010 systemd-hostnamed[1638]: Hostname set to (static) Oct 9 07:54:04.003539 containerd[1626]: time="2024-10-09T07:54:04.003163285Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 9 07:54:04.045687 sshd_keygen[1620]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 07:54:04.075985 containerd[1626]: time="2024-10-09T07:54:04.073796372Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 07:54:04.075985 containerd[1626]: time="2024-10-09T07:54:04.073889224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:54:04.076470 containerd[1626]: time="2024-10-09T07:54:04.076419924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:54:04.076553 containerd[1626]: time="2024-10-09T07:54:04.076470343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:54:04.077551 containerd[1626]: time="2024-10-09T07:54:04.076848635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:54:04.077551 containerd[1626]: time="2024-10-09T07:54:04.076901163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 07:54:04.077551 containerd[1626]: time="2024-10-09T07:54:04.077058008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 07:54:04.077551 containerd[1626]: time="2024-10-09T07:54:04.077166467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:54:04.077551 containerd[1626]: time="2024-10-09T07:54:04.077196180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 07:54:04.077551 containerd[1626]: time="2024-10-09T07:54:04.077357824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:54:04.077795 containerd[1626]: time="2024-10-09T07:54:04.077756077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 07:54:04.077833 containerd[1626]: time="2024-10-09T07:54:04.077789499Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 9 07:54:04.077833 containerd[1626]: time="2024-10-09T07:54:04.077812139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:54:04.078602 containerd[1626]: time="2024-10-09T07:54:04.077991880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:54:04.078602 containerd[1626]: time="2024-10-09T07:54:04.078030041Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 07:54:04.078602 containerd[1626]: time="2024-10-09T07:54:04.078116072Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 9 07:54:04.078602 containerd[1626]: time="2024-10-09T07:54:04.078141386Z" level=info msg="metadata content store policy set" policy=shared Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083216779Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083284791Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083312083Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083369894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083395158Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083418234Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083438952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083619913Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083646275Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083666790Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083690798Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083713560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083745818Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085277 containerd[1626]: time="2024-10-09T07:54:04.083776511Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.083823173Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.083848208Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.083870039Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.083900051Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.083924586Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084096956Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084509779Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084558889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084588494Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084637728Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084745806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084782415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084805859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.085787 containerd[1626]: time="2024-10-09T07:54:04.084825755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.084846611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.084867688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.084886949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.084908944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.084929233Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.085153460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.085180841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.085201857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086273 containerd[1626]: time="2024-10-09T07:54:04.085223003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086727 containerd[1626]: time="2024-10-09T07:54:04.085241831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086885 containerd[1626]: time="2024-10-09T07:54:04.086858204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.086984 containerd[1626]: time="2024-10-09T07:54:04.086961863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 07:54:04.087087 containerd[1626]: time="2024-10-09T07:54:04.087064082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 07:54:04.087620 containerd[1626]: time="2024-10-09T07:54:04.087533243Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:54:04.087917 containerd[1626]: time="2024-10-09T07:54:04.087894411Z" level=info msg="Connect containerd service" Oct 9 07:54:04.088082 containerd[1626]: time="2024-10-09T07:54:04.088056445Z" level=info msg="using legacy CRI server" Oct 9 07:54:04.088189 containerd[1626]: time="2024-10-09T07:54:04.088167384Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:54:04.088433 containerd[1626]: time="2024-10-09T07:54:04.088405984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:54:04.089397 containerd[1626]: time="2024-10-09T07:54:04.089363713Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:54:04.089548 
containerd[1626]: time="2024-10-09T07:54:04.089521409Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:54:04.089674 containerd[1626]: time="2024-10-09T07:54:04.089646198Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:54:04.089787 containerd[1626]: time="2024-10-09T07:54:04.089762451Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:54:04.089896 containerd[1626]: time="2024-10-09T07:54:04.089870718Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:54:04.090865 containerd[1626]: time="2024-10-09T07:54:04.090838370Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:54:04.091018 containerd[1626]: time="2024-10-09T07:54:04.090995566Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:54:04.091318 containerd[1626]: time="2024-10-09T07:54:04.091277849Z" level=info msg="Start subscribing containerd event" Oct 9 07:54:04.091435 containerd[1626]: time="2024-10-09T07:54:04.091410189Z" level=info msg="Start recovering state" Oct 9 07:54:04.091714 containerd[1626]: time="2024-10-09T07:54:04.091690672Z" level=info msg="Start event monitor" Oct 9 07:54:04.091881 containerd[1626]: time="2024-10-09T07:54:04.091856766Z" level=info msg="Start snapshots syncer" Oct 9 07:54:04.091978 containerd[1626]: time="2024-10-09T07:54:04.091956241Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:54:04.092138 containerd[1626]: time="2024-10-09T07:54:04.092115096Z" level=info msg="Start streaming server" Oct 9 07:54:04.092433 containerd[1626]: time="2024-10-09T07:54:04.092324866Z" level=info msg="containerd successfully booted in 0.090896s" Oct 9 07:54:04.093487 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:54:04.131332 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 07:54:04.142717 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 07:54:04.172920 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 07:54:04.173403 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 07:54:04.185892 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 07:54:04.213892 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:54:04.233883 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:54:04.244957 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:54:04.247482 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:54:04.583192 tar[1619]: linux-amd64/LICENSE Oct 9 07:54:04.587271 tar[1619]: linux-amd64/README.md Oct 9 07:54:04.600602 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:54:04.750512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:54:04.760973 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:54:05.462681 kubelet[1722]: E1009 07:54:05.462537 1722 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:54:05.465088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:54:05.465448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:54:09.310444 login[1708]: pam_lastlog(login:session): file /var/log/lastlog is locked/read Oct 9 07:54:09.310831 login[1707]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 9 07:54:09.329979 systemd-logind[1599]: New session 1 of user core. Oct 9 07:54:09.331597 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:54:09.337676 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:54:09.365423 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:54:09.376966 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:54:09.383762 (systemd)[1742]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:09.519130 systemd[1742]: Queued start job for default target default.target. Oct 9 07:54:09.519673 systemd[1742]: Created slice app.slice - User Application Slice. Oct 9 07:54:09.519723 systemd[1742]: Reached target paths.target - Paths. Oct 9 07:54:09.519746 systemd[1742]: Reached target timers.target - Timers. Oct 9 07:54:09.526361 systemd[1742]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:54:09.535917 systemd[1742]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:54:09.535984 systemd[1742]: Reached target sockets.target - Sockets. Oct 9 07:54:09.536019 systemd[1742]: Reached target basic.target - Basic System. Oct 9 07:54:09.536079 systemd[1742]: Reached target default.target - Main User Target. Oct 9 07:54:09.536129 systemd[1742]: Startup finished in 143ms. Oct 9 07:54:09.536275 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:54:09.546893 systemd[1]: Started session-1.scope - Session 1 of User core. 
Oct 9 07:54:10.184846 coreos-metadata[1578]: Oct 09 07:54:10.184 WARN failed to locate config-drive, using the metadata service API instead Oct 9 07:54:10.212291 coreos-metadata[1578]: Oct 09 07:54:10.212 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Oct 9 07:54:10.218552 coreos-metadata[1578]: Oct 09 07:54:10.218 INFO Fetch failed with 404: resource not found Oct 9 07:54:10.218552 coreos-metadata[1578]: Oct 09 07:54:10.218 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 9 07:54:10.219150 coreos-metadata[1578]: Oct 09 07:54:10.219 INFO Fetch successful Oct 9 07:54:10.219383 coreos-metadata[1578]: Oct 09 07:54:10.219 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 9 07:54:10.231875 coreos-metadata[1578]: Oct 09 07:54:10.231 INFO Fetch successful Oct 9 07:54:10.232183 coreos-metadata[1578]: Oct 09 07:54:10.232 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 9 07:54:10.246396 coreos-metadata[1578]: Oct 09 07:54:10.246 INFO Fetch successful Oct 9 07:54:10.246685 coreos-metadata[1578]: Oct 09 07:54:10.246 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 9 07:54:10.261822 coreos-metadata[1578]: Oct 09 07:54:10.261 INFO Fetch successful Oct 9 07:54:10.262082 coreos-metadata[1578]: Oct 09 07:54:10.262 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 9 07:54:10.277908 coreos-metadata[1578]: Oct 09 07:54:10.277 INFO Fetch successful Oct 9 07:54:10.304810 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 07:54:10.305976 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:54:10.314838 login[1708]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 9 07:54:10.322084 systemd-logind[1599]: New session 2 of user core. Oct 9 07:54:10.333927 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:54:10.951141 coreos-metadata[1670]: Oct 09 07:54:10.951 WARN failed to locate config-drive, using the metadata service API instead Oct 9 07:54:10.972498 coreos-metadata[1670]: Oct 09 07:54:10.972 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 9 07:54:10.995981 coreos-metadata[1670]: Oct 09 07:54:10.995 INFO Fetch successful Oct 9 07:54:10.996358 coreos-metadata[1670]: Oct 09 07:54:10.996 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 9 07:54:11.040794 coreos-metadata[1670]: Oct 09 07:54:11.040 INFO Fetch successful Oct 9 07:54:11.042878 unknown[1670]: wrote ssh authorized keys file for user: core Oct 9 07:54:11.062119 update-ssh-keys[1781]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:54:11.063065 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 07:54:11.067904 systemd[1]: Finished sshkeys.service. Oct 9 07:54:11.074784 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:54:11.075439 systemd[1]: Startup finished in 16.185s (kernel) + 12.510s (userspace) = 28.696s. Oct 9 07:54:13.088376 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:54:13.093597 systemd[1]: Started sshd@0-10.230.72.98:22-147.75.109.163:57782.service - OpenSSH per-connection server daemon (147.75.109.163:57782). 
Oct 9 07:54:14.503317 sshd[1787]: Accepted publickey for core from 147.75.109.163 port 57782 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:54:14.505507 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:14.512077 systemd-logind[1599]: New session 3 of user core. Oct 9 07:54:14.519671 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:54:15.716162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:54:15.724505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:15.891459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:15.904882 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:54:15.987667 kubelet[1804]: E1009 07:54:15.987514 1804 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:54:15.993766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:54:15.994054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:54:16.024882 systemd[1]: Started sshd@1-10.230.72.98:22-147.75.109.163:57784.service - OpenSSH per-connection server daemon (147.75.109.163:57784). Oct 9 07:54:16.965821 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 57784 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:54:16.967810 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:16.975393 systemd-logind[1599]: New session 4 of user core. Oct 9 07:54:16.980600 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:54:17.619807 sshd[1813]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:17.625062 systemd[1]: sshd@1-10.230.72.98:22-147.75.109.163:57784.service: Deactivated successfully. Oct 9 07:54:17.625209 systemd-logind[1599]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:54:17.630603 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:54:17.632403 systemd-logind[1599]: Removed session 4. Oct 9 07:54:17.788620 systemd[1]: Started sshd@2-10.230.72.98:22-147.75.109.163:45394.service - OpenSSH per-connection server daemon (147.75.109.163:45394). Oct 9 07:54:18.749816 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 45394 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:54:18.751895 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:18.759693 systemd-logind[1599]: New session 5 of user core. Oct 9 07:54:18.774717 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:54:19.416684 sshd[1821]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:19.420752 systemd-logind[1599]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:54:19.422515 systemd[1]: sshd@2-10.230.72.98:22-147.75.109.163:45394.service: Deactivated successfully. Oct 9 07:54:19.425695 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:54:19.428806 systemd-logind[1599]: Removed session 5. 
Oct 9 07:54:19.574646 systemd[1]: Started sshd@3-10.230.72.98:22-147.75.109.163:45410.service - OpenSSH per-connection server daemon (147.75.109.163:45410). Oct 9 07:54:20.536184 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 45410 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:54:20.539185 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:20.546353 systemd-logind[1599]: New session 6 of user core. Oct 9 07:54:20.553807 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:54:21.201840 sshd[1829]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:21.207141 systemd[1]: sshd@3-10.230.72.98:22-147.75.109.163:45410.service: Deactivated successfully. Oct 9 07:54:21.210370 systemd-logind[1599]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:54:21.211404 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:54:21.212176 systemd-logind[1599]: Removed session 6. Oct 9 07:54:21.364697 systemd[1]: Started sshd@4-10.230.72.98:22-147.75.109.163:45426.service - OpenSSH per-connection server daemon (147.75.109.163:45426). Oct 9 07:54:22.286552 sshd[1837]: Accepted publickey for core from 147.75.109.163 port 45426 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:54:22.288475 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:22.296538 systemd-logind[1599]: New session 7 of user core. Oct 9 07:54:22.299632 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:54:22.798715 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:54:22.799166 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:54:22.814478 sudo[1841]: pam_unix(sudo:session): session closed for user root Oct 9 07:54:22.966198 sshd[1837]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:22.973102 systemd[1]: sshd@4-10.230.72.98:22-147.75.109.163:45426.service: Deactivated successfully. Oct 9 07:54:22.976477 systemd-logind[1599]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:54:22.977341 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:54:22.979266 systemd-logind[1599]: Removed session 7. Oct 9 07:54:23.120697 systemd[1]: Started sshd@5-10.230.72.98:22-147.75.109.163:45436.service - OpenSSH per-connection server daemon (147.75.109.163:45436). Oct 9 07:54:24.056967 sshd[1846]: Accepted publickey for core from 147.75.109.163 port 45436 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:54:24.059345 sshd[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:24.068501 systemd-logind[1599]: New session 8 of user core. Oct 9 07:54:24.071648 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 9 07:54:24.554732 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:54:24.555174 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:54:24.560558 sudo[1851]: pam_unix(sudo:session): session closed for user root Oct 9 07:54:24.569756 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:54:24.570234 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:54:24.600718 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:54:24.603841 auditctl[1854]: No rules Oct 9 07:54:24.604544 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:54:24.604940 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:54:24.611887 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:54:24.667105 augenrules[1873]: No rules Oct 9 07:54:24.669814 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:54:24.672015 sudo[1850]: pam_unix(sudo:session): session closed for user root Oct 9 07:54:24.826621 sshd[1846]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:24.834007 systemd[1]: sshd@5-10.230.72.98:22-147.75.109.163:45436.service: Deactivated successfully. Oct 9 07:54:24.835409 systemd-logind[1599]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:54:24.838805 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:54:24.840018 systemd-logind[1599]: Removed session 8. Oct 9 07:54:24.991903 systemd[1]: Started sshd@6-10.230.72.98:22-147.75.109.163:45440.service - OpenSSH per-connection server daemon (147.75.109.163:45440). Oct 9 07:54:25.919094 sshd[1882]: Accepted publickey for core from 147.75.109.163 port 45440 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:54:25.920958 sshd[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:54:25.927421 systemd-logind[1599]: New session 9 of user core. Oct 9 07:54:25.933843 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:54:26.210150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 07:54:26.221538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:26.356416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:26.362616 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:54:26.419376 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:54:26.419785 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:54:26.447220 kubelet[1898]: E1009 07:54:26.447134 1898 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:54:26.450504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:54:26.450973 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 07:54:26.579600 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:54:26.592073 (dockerd)[1917]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:54:26.967463 dockerd[1917]: time="2024-10-09T07:54:26.967362621Z" level=info msg="Starting up" Oct 9 07:54:27.144904 dockerd[1917]: time="2024-10-09T07:54:27.144777552Z" level=info msg="Loading containers: start." Oct 9 07:54:27.294245 kernel: Initializing XFRM netlink socket Oct 9 07:54:27.416157 systemd-networkd[1270]: docker0: Link UP Oct 9 07:54:27.438397 dockerd[1917]: time="2024-10-09T07:54:27.438337034Z" level=info msg="Loading containers: done." Oct 9 07:54:27.526139 dockerd[1917]: time="2024-10-09T07:54:27.525163213Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:54:27.526139 dockerd[1917]: time="2024-10-09T07:54:27.525501510Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:54:27.526139 dockerd[1917]: time="2024-10-09T07:54:27.525646782Z" level=info msg="Daemon has completed initialization" Oct 9 07:54:27.563865 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:54:27.564890 dockerd[1917]: time="2024-10-09T07:54:27.564615401Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:54:28.807282 containerd[1626]: time="2024-10-09T07:54:28.807154332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:54:29.617725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471644264.mount: Deactivated successfully. 
Oct 9 07:54:31.589465 containerd[1626]: time="2024-10-09T07:54:31.589197708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:31.591654 containerd[1626]: time="2024-10-09T07:54:31.591597081Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213849" Oct 9 07:54:31.592399 containerd[1626]: time="2024-10-09T07:54:31.592358716Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:31.596288 containerd[1626]: time="2024-10-09T07:54:31.596195032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:31.598397 containerd[1626]: time="2024-10-09T07:54:31.597803329Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.790505319s" Oct 9 07:54:31.598397 containerd[1626]: time="2024-10-09T07:54:31.597864028Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:54:31.632414 containerd[1626]: time="2024-10-09T07:54:31.632357367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:54:33.960672 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Oct 9 07:54:34.057535 containerd[1626]: time="2024-10-09T07:54:34.057409663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:34.059373 containerd[1626]: time="2024-10-09T07:54:34.059287277Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208681" Oct 9 07:54:34.060051 containerd[1626]: time="2024-10-09T07:54:34.059545494Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:34.067227 containerd[1626]: time="2024-10-09T07:54:34.065473501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:34.070084 containerd[1626]: time="2024-10-09T07:54:34.070045959Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.437277352s" Oct 9 07:54:34.070161 containerd[1626]: time="2024-10-09T07:54:34.070113056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:54:34.105103 containerd[1626]: time="2024-10-09T07:54:34.105014598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:54:35.683242 containerd[1626]: time="2024-10-09T07:54:35.681990135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:35.684551 containerd[1626]: time="2024-10-09T07:54:35.683358963Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320464" Oct 9 07:54:35.684551 containerd[1626]: time="2024-10-09T07:54:35.684044874Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:35.688228 containerd[1626]: time="2024-10-09T07:54:35.687651023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:35.689447 containerd[1626]: time="2024-10-09T07:54:35.689309449Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.584236559s" Oct 9 07:54:35.689447 containerd[1626]: time="2024-10-09T07:54:35.689351409Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:54:35.714928 containerd[1626]: 
time="2024-10-09T07:54:35.714866649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:54:36.460029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 9 07:54:36.469599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:36.660746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:36.683762 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:54:36.801489 kubelet[2144]: E1009 07:54:36.800237 2144 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:54:36.805485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:54:36.805836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:54:37.351493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748632198.mount: Deactivated successfully. Oct 9 07:54:37.917832 containerd[1626]: time="2024-10-09T07:54:37.917706842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:37.919293 containerd[1626]: time="2024-10-09T07:54:37.919252938Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601758" Oct 9 07:54:37.920768 containerd[1626]: time="2024-10-09T07:54:37.920721205Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:37.925740 containerd[1626]: time="2024-10-09T07:54:37.925615215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:37.927041 containerd[1626]: time="2024-10-09T07:54:37.926541178Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 2.211616528s" Oct 9 07:54:37.927041 containerd[1626]: time="2024-10-09T07:54:37.926590392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 07:54:37.955371 containerd[1626]: time="2024-10-09T07:54:37.955029528Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:54:38.617746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45895766.mount: Deactivated successfully. 
Oct 9 07:54:39.723238 containerd[1626]: time="2024-10-09T07:54:39.723092699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:39.724919 containerd[1626]: time="2024-10-09T07:54:39.724865576Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Oct 9 07:54:39.725707 containerd[1626]: time="2024-10-09T07:54:39.725279803Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:39.729526 containerd[1626]: time="2024-10-09T07:54:39.729491196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:39.731568 containerd[1626]: time="2024-10-09T07:54:39.731195609Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.77610717s" Oct 9 07:54:39.731568 containerd[1626]: time="2024-10-09T07:54:39.731294982Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:54:39.758935 containerd[1626]: time="2024-10-09T07:54:39.758841324Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 07:54:40.367248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379869442.mount: Deactivated successfully. 
Oct 9 07:54:40.372330 containerd[1626]: time="2024-10-09T07:54:40.371437541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:40.373098 containerd[1626]: time="2024-10-09T07:54:40.372846877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Oct 9 07:54:40.373874 containerd[1626]: time="2024-10-09T07:54:40.373807523Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:40.376774 containerd[1626]: time="2024-10-09T07:54:40.376701935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:40.378501 containerd[1626]: time="2024-10-09T07:54:40.377880695Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 618.734968ms" Oct 9 07:54:40.378501 containerd[1626]: time="2024-10-09T07:54:40.377925442Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 07:54:40.404302 containerd[1626]: time="2024-10-09T07:54:40.404241105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 07:54:41.063068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264863487.mount: Deactivated successfully. Oct 9 07:54:43.930854 containerd[1626]: time="2024-10-09T07:54:43.930713548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:43.932295 containerd[1626]: time="2024-10-09T07:54:43.932233163Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Oct 9 07:54:43.932695 containerd[1626]: time="2024-10-09T07:54:43.932638026Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:43.936938 containerd[1626]: time="2024-10-09T07:54:43.936868058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:43.938927 containerd[1626]: time="2024-10-09T07:54:43.938701228Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.53416643s" Oct 9 07:54:43.938927 containerd[1626]: time="2024-10-09T07:54:43.938764520Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 07:54:46.960675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Oct 9 07:54:46.976764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:47.167507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:47.180795 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:54:47.271037 kubelet[2336]: E1009 07:54:47.270409 2336 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:54:47.274935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:54:47.275279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:54:48.344581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:48.359525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:48.390230 systemd[1]: Reloading requested from client PID 2354 ('systemctl') (unit session-9.scope)... Oct 9 07:54:48.390490 systemd[1]: Reloading... Oct 9 07:54:48.394118 update_engine[1603]: I1009 07:54:48.393577 1603 update_attempter.cc:509] Updating boot flags... Oct 9 07:54:48.563391 zram_generator::config[2389]: No configuration found. Oct 9 07:54:48.631465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2410) Oct 9 07:54:48.678372 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2411) Oct 9 07:54:48.798238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:54:48.896118 systemd[1]: Reloading finished in 504 ms. Oct 9 07:54:48.984237 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:54:49.025568 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:49.036037 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:54:49.036425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:49.048597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:49.326070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:49.336759 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:54:49.395451 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:54:49.395451 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:54:49.395451 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:54:49.396897 kubelet[2493]: I1009 07:54:49.396804 2493 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:54:49.962707 kubelet[2493]: I1009 07:54:49.962652 2493 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:54:49.964219 kubelet[2493]: I1009 07:54:49.962922 2493 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:54:49.964219 kubelet[2493]: I1009 07:54:49.963274 2493 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:54:49.992558 kubelet[2493]: E1009 07:54:49.992495 2493 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.72.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:49.994138 kubelet[2493]: I1009 07:54:49.993736 2493 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:54:50.010916 kubelet[2493]: I1009 07:54:50.010871 2493 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:54:50.013419 kubelet[2493]: I1009 07:54:50.013396 2493 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:54:50.014534 kubelet[2493]: I1009 07:54:50.014508 2493 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:54:50.015284 kubelet[2493]: I1009 07:54:50.014810 2493 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:54:50.015284 kubelet[2493]: I1009 07:54:50.014837 2493 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:54:50.015284 kubelet[2493]: I1009 07:54:50.015036 2493 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:54:50.015284 kubelet[2493]: I1009 07:54:50.015244 2493 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:54:50.015284 kubelet[2493]: I1009 
07:54:50.015277 2493 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:54:50.016766 kubelet[2493]: I1009 07:54:50.015332 2493 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:54:50.016766 kubelet[2493]: I1009 07:54:50.015364 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:54:50.018593 kubelet[2493]: W1009 07:54:50.018084 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.72.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.018593 kubelet[2493]: E1009 07:54:50.018157 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.72.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.018593 kubelet[2493]: W1009 07:54:50.018519 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.72.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9xk3k.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.018593 kubelet[2493]: E1009 07:54:50.018565 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.72.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9xk3k.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.019729 kubelet[2493]: I1009 07:54:50.019356 2493 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:54:50.023519 kubelet[2493]: I1009 07:54:50.023376 2493 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:54:50.024513 kubelet[2493]: W1009 07:54:50.024491 2493 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 9 07:54:50.025707 kubelet[2493]: I1009 07:54:50.025500 2493 server.go:1256] "Started kubelet" Oct 9 07:54:50.025707 kubelet[2493]: I1009 07:54:50.025669 2493 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:54:50.026882 kubelet[2493]: I1009 07:54:50.026838 2493 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:54:50.031516 kubelet[2493]: I1009 07:54:50.030307 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:54:50.031516 kubelet[2493]: I1009 07:54:50.030768 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:54:50.031516 kubelet[2493]: I1009 07:54:50.031034 2493 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:54:50.034283 kubelet[2493]: E1009 07:54:50.032821 2493 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.72.98:6443/api/v1/namespaces/default/events\": dial tcp 10.230.72.98:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-9xk3k.gb1.brightbox.com.17fcb9a86c43f4a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-9xk3k.gb1.brightbox.com,UID:srv-9xk3k.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-9xk3k.gb1.brightbox.com,},FirstTimestamp:2024-10-09 07:54:50.025464995 +0000 UTC m=+0.684032230,LastTimestamp:2024-10-09 07:54:50.025464995 +0000 UTC m=+0.684032230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-9xk3k.gb1.brightbox.com,}" Oct 9 07:54:50.036010 kubelet[2493]: I1009 07:54:50.035978 2493 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:54:50.036949 kubelet[2493]: I1009 07:54:50.036921 2493 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:54:50.038192 kubelet[2493]: I1009 07:54:50.038167 2493 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:54:50.042910 kubelet[2493]: W1009 07:54:50.042841 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.72.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.044799 kubelet[2493]: E1009 07:54:50.044775 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.72.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.046132 kubelet[2493]: E1009 07:54:50.046109 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.72.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9xk3k.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.72.98:6443: connect: connection refused" interval="200ms" Oct 9 07:54:50.046744 kubelet[2493]: I1009 07:54:50.046720 2493 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:54:50.046988 kubelet[2493]: I1009 07:54:50.046964 2493 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or 
directory Oct 9 07:54:50.051488 kubelet[2493]: I1009 07:54:50.051465 2493 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:54:50.064335 kubelet[2493]: I1009 07:54:50.064296 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:54:50.065682 kubelet[2493]: I1009 07:54:50.065658 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:54:50.065757 kubelet[2493]: I1009 07:54:50.065717 2493 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:54:50.065757 kubelet[2493]: I1009 07:54:50.065754 2493 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:54:50.065857 kubelet[2493]: E1009 07:54:50.065838 2493 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:54:50.070867 kubelet[2493]: E1009 07:54:50.070260 2493 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:54:50.078343 kubelet[2493]: W1009 07:54:50.078289 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.72.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.078557 kubelet[2493]: E1009 07:54:50.078356 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.72.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.105359 kubelet[2493]: I1009 07:54:50.105282 2493 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:54:50.105539 kubelet[2493]: I1009 07:54:50.105312 2493 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:54:50.105539 kubelet[2493]: I1009 07:54:50.105439 2493 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:54:50.108039 kubelet[2493]: I1009 07:54:50.107998 2493 policy_none.go:49] "None policy: Start" Oct 9 07:54:50.111979 kubelet[2493]: I1009 07:54:50.111899 2493 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:54:50.111979 kubelet[2493]: I1009 07:54:50.111974 2493 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:54:50.118336 kubelet[2493]: I1009 07:54:50.118291 2493 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:54:50.118730 kubelet[2493]: I1009 07:54:50.118685 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:54:50.126623 kubelet[2493]: E1009 07:54:50.126576 2493 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-9xk3k.gb1.brightbox.com\" not found" Oct 9 07:54:50.140239 kubelet[2493]: I1009 07:54:50.140069 2493 kubelet_node_status.go:73] "Attempting to register node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.140549 kubelet[2493]: E1009 07:54:50.140517 2493 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.72.98:6443/api/v1/nodes\": dial tcp 10.230.72.98:6443: connect: connection refused" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.167151 kubelet[2493]: I1009 07:54:50.167033 2493 topology_manager.go:215] 
"Topology Admit Handler" podUID="7a842c9aaef7bd71f648d91f3bd73c76" podNamespace="kube-system" podName="kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.171629 kubelet[2493]: I1009 07:54:50.171600 2493 topology_manager.go:215] "Topology Admit Handler" podUID="232e33a7a42f65648263bc8f3c144241" podNamespace="kube-system" podName="kube-scheduler-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.174459 kubelet[2493]: I1009 07:54:50.174249 2493 topology_manager.go:215] "Topology Admit Handler" podUID="10477e625fb1605eff9c73b1a1709e01" podNamespace="kube-system" podName="kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.247702 kubelet[2493]: E1009 07:54:50.247516 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.72.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9xk3k.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.72.98:6443: connect: connection refused" interval="400ms" Oct 9 07:54:50.339180 kubelet[2493]: I1009 07:54:50.339105 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/232e33a7a42f65648263bc8f3c144241-kubeconfig\") pod \"kube-scheduler-srv-9xk3k.gb1.brightbox.com\" (UID: \"232e33a7a42f65648263bc8f3c144241\") " pod="kube-system/kube-scheduler-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.339180 kubelet[2493]: I1009 07:54:50.339182 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10477e625fb1605eff9c73b1a1709e01-ca-certs\") pod \"kube-apiserver-srv-9xk3k.gb1.brightbox.com\" (UID: \"10477e625fb1605eff9c73b1a1709e01\") " pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.339489 kubelet[2493]: I1009 07:54:50.339236 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10477e625fb1605eff9c73b1a1709e01-k8s-certs\") pod \"kube-apiserver-srv-9xk3k.gb1.brightbox.com\" (UID: \"10477e625fb1605eff9c73b1a1709e01\") " pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.339489 kubelet[2493]: I1009 07:54:50.339268 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-k8s-certs\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.339489 kubelet[2493]: I1009 07:54:50.339305 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.339489 kubelet[2493]: I1009 07:54:50.339335 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-kubeconfig\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" 
Oct 9 07:54:50.339489 kubelet[2493]: I1009 07:54:50.339369 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10477e625fb1605eff9c73b1a1709e01-usr-share-ca-certificates\") pod \"kube-apiserver-srv-9xk3k.gb1.brightbox.com\" (UID: \"10477e625fb1605eff9c73b1a1709e01\") " pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.339727 kubelet[2493]: I1009 07:54:50.339401 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-ca-certs\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.339727 kubelet[2493]: I1009 07:54:50.339432 2493 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-flexvolume-dir\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.343939 kubelet[2493]: I1009 07:54:50.343911 2493 kubelet_node_status.go:73] "Attempting to register node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.344351 kubelet[2493]: E1009 07:54:50.344328 2493 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.72.98:6443/api/v1/nodes\": dial tcp 10.230.72.98:6443: connect: connection refused" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.479477 containerd[1626]: time="2024-10-09T07:54:50.479331916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-9xk3k.gb1.brightbox.com,Uid:7a842c9aaef7bd71f648d91f3bd73c76,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:50.489293 containerd[1626]: time="2024-10-09T07:54:50.489246516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-9xk3k.gb1.brightbox.com,Uid:232e33a7a42f65648263bc8f3c144241,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:50.491301 containerd[1626]: time="2024-10-09T07:54:50.491068150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-9xk3k.gb1.brightbox.com,Uid:10477e625fb1605eff9c73b1a1709e01,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:50.650229 kubelet[2493]: E1009 07:54:50.649398 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.72.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9xk3k.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.72.98:6443: connect: connection refused" interval="800ms" Oct 9 07:54:50.747532 kubelet[2493]: I1009 07:54:50.747480 2493 kubelet_node_status.go:73] "Attempting to register node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.747884 kubelet[2493]: E1009 07:54:50.747856 2493 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.72.98:6443/api/v1/nodes\": dial tcp 10.230.72.98:6443: connect: connection refused" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:50.905771 kubelet[2493]: W1009 07:54:50.905564 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.230.72.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9xk3k.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:50.905771 kubelet[2493]: E1009 07:54:50.905661 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.72.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9xk3k.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:51.099206 kubelet[2493]: W1009 07:54:51.098871 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.72.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:51.099206 kubelet[2493]: E1009 07:54:51.098929 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.72.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:51.107936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872079755.mount: Deactivated successfully. Oct 9 07:54:51.111769 containerd[1626]: time="2024-10-09T07:54:51.111704005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:54:51.113830 containerd[1626]: time="2024-10-09T07:54:51.113752384Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:54:51.114409 containerd[1626]: time="2024-10-09T07:54:51.114366878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:54:51.115448 containerd[1626]: time="2024-10-09T07:54:51.115397813Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:54:51.116715 containerd[1626]: time="2024-10-09T07:54:51.116595775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Oct 9 07:54:51.117620 containerd[1626]: time="2024-10-09T07:54:51.117466091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:54:51.117620 containerd[1626]: time="2024-10-09T07:54:51.117563503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:54:51.121920 containerd[1626]: time="2024-10-09T07:54:51.121875682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:54:51.124475 containerd[1626]: time="2024-10-09T07:54:51.124134099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.946527ms" Oct 9 07:54:51.126652 containerd[1626]: time="2024-10-09T07:54:51.126520358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.001039ms" Oct 9 07:54:51.131779 containerd[1626]: time="2024-10-09T07:54:51.131550185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.933231ms" Oct 9 07:54:51.257876 kubelet[2493]: E1009 07:54:51.257827 2493 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.72.98:6443/api/v1/namespaces/default/events\": dial tcp 10.230.72.98:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-9xk3k.gb1.brightbox.com.17fcb9a86c43f4a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-9xk3k.gb1.brightbox.com,UID:srv-9xk3k.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-9xk3k.gb1.brightbox.com,},FirstTimestamp:2024-10-09 07:54:50.025464995 +0000 UTC m=+0.684032230,LastTimestamp:2024-10-09 07:54:50.025464995 +0000 UTC m=+0.684032230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-9xk3k.gb1.brightbox.com,}" Oct 9 07:54:51.319416 containerd[1626]: time="2024-10-09T07:54:51.318663151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:51.319416 containerd[1626]: time="2024-10-09T07:54:51.318752403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:51.319416 containerd[1626]: time="2024-10-09T07:54:51.318781872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:51.319416 containerd[1626]: time="2024-10-09T07:54:51.318801356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:51.323195 containerd[1626]: time="2024-10-09T07:54:51.323101167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:51.324447 containerd[1626]: time="2024-10-09T07:54:51.324248360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:51.324447 containerd[1626]: time="2024-10-09T07:54:51.324281608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:51.324447 containerd[1626]: time="2024-10-09T07:54:51.324297601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:51.334781 containerd[1626]: time="2024-10-09T07:54:51.334278838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:51.334781 containerd[1626]: time="2024-10-09T07:54:51.334354569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:51.334781 containerd[1626]: time="2024-10-09T07:54:51.334416103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:51.334781 containerd[1626]: time="2024-10-09T07:54:51.334435745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:51.453869 kubelet[2493]: E1009 07:54:51.453378 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.72.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9xk3k.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.72.98:6443: connect: connection refused" interval="1.6s" Oct 9 07:54:51.477919 containerd[1626]: time="2024-10-09T07:54:51.477486204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-9xk3k.gb1.brightbox.com,Uid:10477e625fb1605eff9c73b1a1709e01,Namespace:kube-system,Attempt:0,} returns sandbox id \"4feaa86115a18f4b6a63ec5195259586adaf40134e6acc1ddf53144ef47ea0dd\"" Oct 9 07:54:51.483061 containerd[1626]: time="2024-10-09T07:54:51.483003548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-9xk3k.gb1.brightbox.com,Uid:7a842c9aaef7bd71f648d91f3bd73c76,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c5778348b65cebe60985aea605c3c7b40d06f53ff31aec12b7275aff4a96cc7\"" Oct 9 07:54:51.486676 containerd[1626]: time="2024-10-09T07:54:51.486643960Z" level=info msg="CreateContainer within sandbox \"4feaa86115a18f4b6a63ec5195259586adaf40134e6acc1ddf53144ef47ea0dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:54:51.489506 containerd[1626]: time="2024-10-09T07:54:51.489467316Z" level=info msg="CreateContainer within sandbox \"1c5778348b65cebe60985aea605c3c7b40d06f53ff31aec12b7275aff4a96cc7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:54:51.490466 containerd[1626]: time="2024-10-09T07:54:51.490432960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-9xk3k.gb1.brightbox.com,Uid:232e33a7a42f65648263bc8f3c144241,Namespace:kube-system,Attempt:0,} returns sandbox id \"da471d253e5813cda33371ce77220aace2a289720c8fa40978c143a5538c0667\"" Oct 9 07:54:51.496677 containerd[1626]: time="2024-10-09T07:54:51.496635230Z" level=info msg="CreateContainer within sandbox \"da471d253e5813cda33371ce77220aace2a289720c8fa40978c143a5538c0667\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:54:51.516283 containerd[1626]: time="2024-10-09T07:54:51.515433678Z" level=info msg="CreateContainer within sandbox \"1c5778348b65cebe60985aea605c3c7b40d06f53ff31aec12b7275aff4a96cc7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"aa5636ca5ddce7e7161c17ca042affe1bad4321b5ef2aa1861bce8e9a62923ea\"" Oct 9 07:54:51.517826 containerd[1626]: time="2024-10-09T07:54:51.517501570Z" level=info msg="StartContainer for \"aa5636ca5ddce7e7161c17ca042affe1bad4321b5ef2aa1861bce8e9a62923ea\"" Oct 9 07:54:51.523480 containerd[1626]: time="2024-10-09T07:54:51.523441535Z" level=info msg="CreateContainer within sandbox \"4feaa86115a18f4b6a63ec5195259586adaf40134e6acc1ddf53144ef47ea0dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"241fdb6f67bf2709f4049c57ca60f4a9d4a92212d86b4c495d7d3563cda3be5a\"" Oct 9 07:54:51.525227 containerd[1626]: time="2024-10-09T07:54:51.524191945Z" level=info msg="StartContainer for \"241fdb6f67bf2709f4049c57ca60f4a9d4a92212d86b4c495d7d3563cda3be5a\"" Oct 9 07:54:51.529275 containerd[1626]: time="2024-10-09T07:54:51.529230640Z" level=info msg="CreateContainer within sandbox \"da471d253e5813cda33371ce77220aace2a289720c8fa40978c143a5538c0667\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a67841c0aa5513c9cf1800fe38f74a4db04f3bc801a1db4784dcce9860bb8104\"" Oct 9 07:54:51.529922 containerd[1626]: time="2024-10-09T07:54:51.529892697Z" level=info msg="StartContainer for \"a67841c0aa5513c9cf1800fe38f74a4db04f3bc801a1db4784dcce9860bb8104\"" Oct 9 07:54:51.553568 kubelet[2493]: I1009 07:54:51.553533 2493 kubelet_node_status.go:73] "Attempting to register node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:51.555356 kubelet[2493]: E1009 07:54:51.555328 2493 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.72.98:6443/api/v1/nodes\": dial tcp 10.230.72.98:6443: connect: connection refused" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:51.584845 kubelet[2493]: W1009 07:54:51.584772 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.72.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:51.585522 kubelet[2493]: E1009 07:54:51.585402 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.72.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:51.669033 containerd[1626]: time="2024-10-09T07:54:51.668696619Z" level=info msg="StartContainer for \"aa5636ca5ddce7e7161c17ca042affe1bad4321b5ef2aa1861bce8e9a62923ea\" returns successfully" Oct 9 07:54:51.674760 kubelet[2493]: W1009 07:54:51.674004 2493 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.72.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:51.674760 kubelet[2493]: E1009 07:54:51.674074 2493 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.72.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:51.709235 containerd[1626]: time="2024-10-09T07:54:51.707184714Z" level=info msg="StartContainer for \"a67841c0aa5513c9cf1800fe38f74a4db04f3bc801a1db4784dcce9860bb8104\" returns successfully" Oct 9 07:54:51.717692 containerd[1626]: time="2024-10-09T07:54:51.717642738Z" level=info msg="StartContainer for 
\"241fdb6f67bf2709f4049c57ca60f4a9d4a92212d86b4c495d7d3563cda3be5a\" returns successfully" Oct 9 07:54:52.015235 kubelet[2493]: E1009 07:54:52.013327 2493 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.72.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.72.98:6443: connect: connection refused Oct 9 07:54:53.163366 kubelet[2493]: I1009 07:54:53.162693 2493 kubelet_node_status.go:73] "Attempting to register node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:54.193296 kubelet[2493]: E1009 07:54:54.193244 2493 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-9xk3k.gb1.brightbox.com\" not found" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:54.218758 kubelet[2493]: I1009 07:54:54.218348 2493 kubelet_node_status.go:76] "Successfully registered node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:55.021009 kubelet[2493]: I1009 07:54:55.019565 2493 apiserver.go:52] "Watching apiserver" Oct 9 07:54:55.038450 kubelet[2493]: I1009 07:54:55.038388 2493 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:54:57.002455 systemd[1]: Reloading requested from client PID 2766 ('systemctl') (unit session-9.scope)... Oct 9 07:54:57.002512 systemd[1]: Reloading... Oct 9 07:54:57.136489 zram_generator::config[2806]: No configuration found. Oct 9 07:54:57.367190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:54:57.490349 systemd[1]: Reloading finished in 486 ms. Oct 9 07:54:57.546006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:57.562035 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:54:57.563132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:57.573880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:57.814639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:57.819865 (kubelet)[2877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:54:57.951240 kubelet[2877]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:54:57.951240 kubelet[2877]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:54:57.951240 kubelet[2877]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:54:57.951240 kubelet[2877]: I1009 07:54:57.950590 2877 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:54:57.958801 kubelet[2877]: I1009 07:54:57.958762 2877 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:54:57.959022 kubelet[2877]: I1009 07:54:57.959003 2877 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:54:57.959427 kubelet[2877]: I1009 07:54:57.959405 2877 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:54:57.964227 kubelet[2877]: I1009 07:54:57.962884 2877 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 07:54:57.965770 kubelet[2877]: I1009 07:54:57.965532 2877 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:54:57.988130 kubelet[2877]: I1009 07:54:57.987272 2877 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:54:57.990034 kubelet[2877]: I1009 07:54:57.989979 2877 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:54:57.990584 kubelet[2877]: I1009 07:54:57.990542 2877 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:54:57.990838 kubelet[2877]: I1009 07:54:57.990817 2877 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:54:57.990947 kubelet[2877]: I1009 07:54:57.990929 2877 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:54:57.991765 kubelet[2877]: I1009 07:54:57.991743 2877 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:54:57.992141 kubelet[2877]: I1009 07:54:57.992120 2877 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:54:57.992328 kubelet[2877]: I1009 07:54:57.992307 2877 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:54:57.992507 kubelet[2877]: I1009 07:54:57.992486 2877 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:54:57.992649 
kubelet[2877]: I1009 07:54:57.992626 2877 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:54:57.999840 kubelet[2877]: I1009 07:54:57.998358 2877 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:54:58.013422 kubelet[2877]: I1009 07:54:58.007020 2877 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:54:58.013422 kubelet[2877]: I1009 07:54:58.007940 2877 server.go:1256] "Started kubelet" Oct 9 07:54:58.013422 kubelet[2877]: I1009 07:54:58.012638 2877 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:54:58.020170 kubelet[2877]: I1009 07:54:58.019096 2877 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:54:58.020459 kubelet[2877]: I1009 07:54:58.020340 2877 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:54:58.029076 kubelet[2877]: I1009 07:54:58.028532 2877 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:54:58.029076 kubelet[2877]: I1009 07:54:58.028972 2877 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:54:58.030481 kubelet[2877]: I1009 07:54:58.030446 2877 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:54:58.031762 kubelet[2877]: I1009 07:54:58.031063 2877 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:54:58.037155 kubelet[2877]: I1009 07:54:58.036695 2877 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:54:58.038931 kubelet[2877]: I1009 07:54:58.038888 2877 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:54:58.040397 kubelet[2877]: I1009 07:54:58.039798 2877 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:54:58.044637 kubelet[2877]: E1009 07:54:58.044329 2877 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:54:58.050426 kubelet[2877]: I1009 07:54:58.048881 2877 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:54:58.078074 kubelet[2877]: I1009 07:54:58.077943 2877 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:54:58.082976 kubelet[2877]: I1009 07:54:58.082949 2877 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:54:58.084723 kubelet[2877]: I1009 07:54:58.084663 2877 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:54:58.084845 kubelet[2877]: I1009 07:54:58.084715 2877 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:54:58.085089 kubelet[2877]: E1009 07:54:58.085065 2877 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:54:58.156793 kubelet[2877]: I1009 07:54:58.156744 2877 kubelet_node_status.go:73] "Attempting to register node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.166700 kubelet[2877]: I1009 07:54:58.166644 2877 kubelet_node_status.go:112] "Node was previously registered" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.167188 kubelet[2877]: I1009 07:54:58.166979 2877 kubelet_node_status.go:76] "Successfully registered node" node="srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.185569 kubelet[2877]: E1009 07:54:58.185524 2877 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:54:58.204067 kubelet[2877]: I1009 07:54:58.203550 2877 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:54:58.204067 kubelet[2877]: I1009 07:54:58.203585 2877 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:54:58.204067 kubelet[2877]: I1009 07:54:58.203622 2877 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:54:58.204906 kubelet[2877]: I1009 07:54:58.204508 2877 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:54:58.204906 kubelet[2877]: I1009 07:54:58.204554 2877 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:54:58.204906 kubelet[2877]: I1009 07:54:58.204576 2877 policy_none.go:49] "None policy: Start" Oct 9 07:54:58.206014 kubelet[2877]: I1009 07:54:58.205619 2877 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:54:58.206014 kubelet[2877]: I1009 07:54:58.205663 2877 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:54:58.206014 kubelet[2877]: I1009 07:54:58.205840 2877 state_mem.go:75] "Updated machine memory state" Oct 9 07:54:58.208728 kubelet[2877]: I1009 07:54:58.208701 2877 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:54:58.213264 kubelet[2877]: I1009 07:54:58.213231 2877 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:54:58.386906 kubelet[2877]: I1009 07:54:58.386744 2877 topology_manager.go:215] "Topology Admit Handler" podUID="10477e625fb1605eff9c73b1a1709e01" podNamespace="kube-system" podName="kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.387385 kubelet[2877]: I1009 07:54:58.387278 2877 topology_manager.go:215] "Topology Admit Handler" podUID="7a842c9aaef7bd71f648d91f3bd73c76" podNamespace="kube-system" podName="kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.389763 kubelet[2877]: I1009 07:54:58.389059 2877 topology_manager.go:215] "Topology Admit Handler" podUID="232e33a7a42f65648263bc8f3c144241" podNamespace="kube-system" podName="kube-scheduler-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.402407 kubelet[2877]: W1009 07:54:58.401824 2877 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:58.403842 kubelet[2877]: W1009 07:54:58.403821 2877 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:58.404076 kubelet[2877]: W1009 07:54:58.404056 2877 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:58.444580 kubelet[2877]: I1009 07:54:58.444512 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-ca-certs\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.444771 kubelet[2877]: I1009 07:54:58.444597 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-flexvolume-dir\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.444771 kubelet[2877]: I1009 07:54:58.444633 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-k8s-certs\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.444771 kubelet[2877]: I1009 07:54:58.444662 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-kubeconfig\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.444771 kubelet[2877]: I1009 07:54:58.444705 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a842c9aaef7bd71f648d91f3bd73c76-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-9xk3k.gb1.brightbox.com\" (UID: \"7a842c9aaef7bd71f648d91f3bd73c76\") " pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.444771 kubelet[2877]: I1009 07:54:58.444745 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10477e625fb1605eff9c73b1a1709e01-ca-certs\") pod \"kube-apiserver-srv-9xk3k.gb1.brightbox.com\" (UID: \"10477e625fb1605eff9c73b1a1709e01\") " pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.445010 kubelet[2877]: I1009 07:54:58.444776 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10477e625fb1605eff9c73b1a1709e01-k8s-certs\") pod \"kube-apiserver-srv-9xk3k.gb1.brightbox.com\" (UID: \"10477e625fb1605eff9c73b1a1709e01\") " pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.445010 kubelet[2877]: I1009 07:54:58.444807 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10477e625fb1605eff9c73b1a1709e01-usr-share-ca-certificates\") pod \"kube-apiserver-srv-9xk3k.gb1.brightbox.com\" (UID: \"10477e625fb1605eff9c73b1a1709e01\") " pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.445010 kubelet[2877]: I1009 07:54:58.444836 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/232e33a7a42f65648263bc8f3c144241-kubeconfig\") pod \"kube-scheduler-srv-9xk3k.gb1.brightbox.com\" (UID: \"232e33a7a42f65648263bc8f3c144241\") " pod="kube-system/kube-scheduler-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:58.997629 kubelet[2877]: I1009 07:54:58.997302 2877 apiserver.go:52] "Watching apiserver" Oct 9 07:54:59.034729 kubelet[2877]: I1009 07:54:59.034602 2877 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 07:54:59.148988 kubelet[2877]: W1009 07:54:59.148936 2877 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:59.149186 kubelet[2877]: E1009 07:54:59.149084 2877 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-9xk3k.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" Oct 9 07:54:59.204156 kubelet[2877]: I1009 07:54:59.204040 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-9xk3k.gb1.brightbox.com" podStartSLOduration=1.203952512 podStartE2EDuration="1.203952512s" podCreationTimestamp="2024-10-09 07:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:59.1828184 +0000 UTC m=+1.351329376" watchObservedRunningTime="2024-10-09 07:54:59.203952512 +0000 UTC m=+1.372463524" Oct 9 07:54:59.204558 kubelet[2877]: I1009 07:54:59.204217 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-9xk3k.gb1.brightbox.com" podStartSLOduration=1.20417873 podStartE2EDuration="1.20417873s" podCreationTimestamp="2024-10-09 07:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:59.198906 +0000 UTC m=+1.367416998" watchObservedRunningTime="2024-10-09 07:54:59.20417873 +0000 UTC m=+1.372689708" Oct 9 07:54:59.278897 kubelet[2877]: I1009 07:54:59.278536 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-9xk3k.gb1.brightbox.com" podStartSLOduration=1.278489119 podStartE2EDuration="1.278489119s" podCreationTimestamp="2024-10-09 07:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:59.232586929 +0000 UTC m=+1.401097954" watchObservedRunningTime="2024-10-09 07:54:59.278489119 +0000 UTC m=+1.447000112" Oct 9 07:55:03.481974 sudo[1905]: pam_unix(sudo:session): session closed for user root Oct 9 07:55:03.635233 sshd[1882]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:03.643051 systemd[1]: sshd@6-10.230.72.98:22-147.75.109.163:45440.service: Deactivated successfully. Oct 9 07:55:03.646555 systemd-logind[1599]: Session 9 logged out. Waiting for processes to exit. 
Oct 9 07:55:03.647738 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:55:03.649736 systemd-logind[1599]: Removed session 9. Oct 9 07:55:10.622802 kubelet[2877]: I1009 07:55:10.622719 2877 topology_manager.go:215] "Topology Admit Handler" podUID="e80b57b0-2a23-4576-9e20-c88c76445761" podNamespace="kube-system" podName="kube-proxy-68j96" Oct 9 07:55:10.638142 kubelet[2877]: I1009 07:55:10.636803 2877 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:55:10.642482 containerd[1626]: time="2024-10-09T07:55:10.642197026Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 07:55:10.646626 kubelet[2877]: I1009 07:55:10.644056 2877 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:55:10.737737 kubelet[2877]: I1009 07:55:10.737615 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e80b57b0-2a23-4576-9e20-c88c76445761-xtables-lock\") pod \"kube-proxy-68j96\" (UID: \"e80b57b0-2a23-4576-9e20-c88c76445761\") " pod="kube-system/kube-proxy-68j96" Oct 9 07:55:10.737737 kubelet[2877]: I1009 07:55:10.737692 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e80b57b0-2a23-4576-9e20-c88c76445761-kube-proxy\") pod \"kube-proxy-68j96\" (UID: \"e80b57b0-2a23-4576-9e20-c88c76445761\") " pod="kube-system/kube-proxy-68j96" Oct 9 07:55:10.737737 kubelet[2877]: I1009 07:55:10.737766 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e80b57b0-2a23-4576-9e20-c88c76445761-lib-modules\") pod \"kube-proxy-68j96\" (UID: \"e80b57b0-2a23-4576-9e20-c88c76445761\") " pod="kube-system/kube-proxy-68j96" Oct 9 07:55:10.738089 kubelet[2877]: I1009 07:55:10.737816 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv9pf\" (UniqueName: \"kubernetes.io/projected/e80b57b0-2a23-4576-9e20-c88c76445761-kube-api-access-fv9pf\") pod \"kube-proxy-68j96\" (UID: \"e80b57b0-2a23-4576-9e20-c88c76445761\") " pod="kube-system/kube-proxy-68j96" Oct 9 07:55:10.782068 kubelet[2877]: I1009 07:55:10.781347 2877 topology_manager.go:215] "Topology Admit Handler" podUID="3f9bac58-76d4-4daf-a485-d9a83894cb10" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-jxmtx" Oct 9 07:55:10.839030 kubelet[2877]: I1009 07:55:10.838983 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f9bac58-76d4-4daf-a485-d9a83894cb10-var-lib-calico\") pod \"tigera-operator-5d56685c77-jxmtx\" (UID: \"3f9bac58-76d4-4daf-a485-d9a83894cb10\") " pod="tigera-operator/tigera-operator-5d56685c77-jxmtx" Oct 9 07:55:10.839290 kubelet[2877]: I1009 07:55:10.839105 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4k8\" (UniqueName: \"kubernetes.io/projected/3f9bac58-76d4-4daf-a485-d9a83894cb10-kube-api-access-zs4k8\") pod \"tigera-operator-5d56685c77-jxmtx\" (UID: \"3f9bac58-76d4-4daf-a485-d9a83894cb10\") " pod="tigera-operator/tigera-operator-5d56685c77-jxmtx" Oct 9 07:55:10.954307 containerd[1626]: time="2024-10-09T07:55:10.950146723Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68j96,Uid:e80b57b0-2a23-4576-9e20-c88c76445761,Namespace:kube-system,Attempt:0,}" Oct 9 07:55:10.996660 containerd[1626]: time="2024-10-09T07:55:10.996486707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:10.997491 containerd[1626]: time="2024-10-09T07:55:10.996622965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:10.997491 containerd[1626]: time="2024-10-09T07:55:10.996692636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:10.997491 containerd[1626]: time="2024-10-09T07:55:10.996740200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:11.065581 containerd[1626]: time="2024-10-09T07:55:11.065525764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68j96,Uid:e80b57b0-2a23-4576-9e20-c88c76445761,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f30f43232ac40d67b872a5770c8e8ebc659a54ae79274ce8926707b619b35ce\"" Oct 9 07:55:11.071627 containerd[1626]: time="2024-10-09T07:55:11.071575253Z" level=info msg="CreateContainer within sandbox \"5f30f43232ac40d67b872a5770c8e8ebc659a54ae79274ce8926707b619b35ce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:55:11.089410 containerd[1626]: time="2024-10-09T07:55:11.089362342Z" level=info msg="CreateContainer within sandbox \"5f30f43232ac40d67b872a5770c8e8ebc659a54ae79274ce8926707b619b35ce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"71578aeed3d4a3cfa650fe05331e00bbdbeeeb7c9134f7a27a93e3e6520f8df9\"" Oct 9 07:55:11.090426 containerd[1626]: time="2024-10-09T07:55:11.090328321Z" level=info msg="StartContainer for \"71578aeed3d4a3cfa650fe05331e00bbdbeeeb7c9134f7a27a93e3e6520f8df9\"" Oct 9 07:55:11.099655 containerd[1626]: time="2024-10-09T07:55:11.099522820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-jxmtx,Uid:3f9bac58-76d4-4daf-a485-d9a83894cb10,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:55:11.157686 containerd[1626]: time="2024-10-09T07:55:11.157517903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:11.158326 containerd[1626]: time="2024-10-09T07:55:11.158173042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:11.158493 containerd[1626]: time="2024-10-09T07:55:11.158271968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:11.158716 containerd[1626]: time="2024-10-09T07:55:11.158443386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:11.198529 containerd[1626]: time="2024-10-09T07:55:11.198422595Z" level=info msg="StartContainer for \"71578aeed3d4a3cfa650fe05331e00bbdbeeeb7c9134f7a27a93e3e6520f8df9\" returns successfully" Oct 9 07:55:11.266738 containerd[1626]: time="2024-10-09T07:55:11.266678141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-jxmtx,Uid:3f9bac58-76d4-4daf-a485-d9a83894cb10,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5866e89a11299dad44e45897c01fe967b427b90b85fb1413862279231ef3e497\"" Oct 9 07:55:11.268991 containerd[1626]: time="2024-10-09T07:55:11.268944683Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:55:11.861198 systemd[1]: run-containerd-runc-k8s.io-5f30f43232ac40d67b872a5770c8e8ebc659a54ae79274ce8926707b619b35ce-runc.A2pkWT.mount: Deactivated successfully. Oct 9 07:55:12.170861 kubelet[2877]: I1009 07:55:12.170507 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-68j96" podStartSLOduration=2.170427953 podStartE2EDuration="2.170427953s" podCreationTimestamp="2024-10-09 07:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:55:12.169855038 +0000 UTC m=+14.338366052" watchObservedRunningTime="2024-10-09 07:55:12.170427953 +0000 UTC m=+14.338938947" Oct 9 07:55:12.973907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402003845.mount: Deactivated successfully. Oct 9 07:55:13.792838 containerd[1626]: time="2024-10-09T07:55:13.792698065Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:13.794959 containerd[1626]: time="2024-10-09T07:55:13.794875033Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136589" Oct 9 07:55:13.797685 containerd[1626]: time="2024-10-09T07:55:13.796017800Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:13.803786 containerd[1626]: time="2024-10-09T07:55:13.803712296Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:13.805620 containerd[1626]: time="2024-10-09T07:55:13.804948578Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.535950023s" Oct 9 07:55:13.805620 containerd[1626]: time="2024-10-09T07:55:13.804995000Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:55:13.808645 containerd[1626]: time="2024-10-09T07:55:13.808594377Z" level=info msg="CreateContainer within sandbox \"5866e89a11299dad44e45897c01fe967b427b90b85fb1413862279231ef3e497\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:55:13.993165 containerd[1626]: time="2024-10-09T07:55:13.992988097Z" level=info 
msg="CreateContainer within sandbox \"5866e89a11299dad44e45897c01fe967b427b90b85fb1413862279231ef3e497\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fa9348fae382cb5aa6f80714587d153186fad9c336215f985f17aafbfa8b2ff4\"" Oct 9 07:55:13.994126 containerd[1626]: time="2024-10-09T07:55:13.994091598Z" level=info msg="StartContainer for \"fa9348fae382cb5aa6f80714587d153186fad9c336215f985f17aafbfa8b2ff4\"" Oct 9 07:55:14.076673 containerd[1626]: time="2024-10-09T07:55:14.076545085Z" level=info msg="StartContainer for \"fa9348fae382cb5aa6f80714587d153186fad9c336215f985f17aafbfa8b2ff4\" returns successfully" Oct 9 07:55:14.180384 kubelet[2877]: I1009 07:55:14.179714 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-jxmtx" podStartSLOduration=1.642281144 podStartE2EDuration="4.179595462s" podCreationTimestamp="2024-10-09 07:55:10 +0000 UTC" firstStartedPulling="2024-10-09 07:55:11.268183372 +0000 UTC m=+13.436694345" lastFinishedPulling="2024-10-09 07:55:13.80549769 +0000 UTC m=+15.974008663" observedRunningTime="2024-10-09 07:55:14.179334034 +0000 UTC m=+16.347845033" watchObservedRunningTime="2024-10-09 07:55:14.179595462 +0000 UTC m=+16.348106441" Oct 9 07:55:17.361535 kubelet[2877]: I1009 07:55:17.361476 2877 topology_manager.go:215] "Topology Admit Handler" podUID="8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a" podNamespace="calico-system" podName="calico-typha-84b7fdc47c-d5jgt" Oct 9 07:55:17.473697 kubelet[2877]: I1009 07:55:17.473643 2877 topology_manager.go:215] "Topology Admit Handler" podUID="7ae7e645-c588-418c-9414-5d774b9a468b" podNamespace="calico-system" podName="calico-node-d7skx" Oct 9 07:55:17.492349 kubelet[2877]: I1009 07:55:17.492022 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a-tigera-ca-bundle\") pod \"calico-typha-84b7fdc47c-d5jgt\" (UID: \"8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a\") " pod="calico-system/calico-typha-84b7fdc47c-d5jgt" Oct 9 07:55:17.492349 kubelet[2877]: I1009 07:55:17.492168 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a-typha-certs\") pod \"calico-typha-84b7fdc47c-d5jgt\" (UID: \"8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a\") " pod="calico-system/calico-typha-84b7fdc47c-d5jgt" Oct 9 07:55:17.492349 kubelet[2877]: I1009 07:55:17.492248 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8k5\" (UniqueName: \"kubernetes.io/projected/8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a-kube-api-access-gz8k5\") pod \"calico-typha-84b7fdc47c-d5jgt\" (UID: \"8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a\") " pod="calico-system/calico-typha-84b7fdc47c-d5jgt" Oct 9 07:55:17.594965 kubelet[2877]: I1009 07:55:17.593331 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-cni-net-dir\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.594965 kubelet[2877]: I1009 07:55:17.593416 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-flexvol-driver-host\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.594965 kubelet[2877]: I1009 07:55:17.593488 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-xtables-lock\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.594965 kubelet[2877]: I1009 07:55:17.593520 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-policysync\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.594965 kubelet[2877]: I1009 07:55:17.593573 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-var-run-calico\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.595399 kubelet[2877]: I1009 07:55:17.593603 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnlb\" (UniqueName: \"kubernetes.io/projected/7ae7e645-c588-418c-9414-5d774b9a468b-kube-api-access-fmnlb\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.595399 kubelet[2877]: I1009 07:55:17.593639 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-cni-log-dir\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.595399 kubelet[2877]: I1009 07:55:17.593682 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-lib-modules\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.595399 kubelet[2877]: I1009 07:55:17.593715 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae7e645-c588-418c-9414-5d774b9a468b-tigera-ca-bundle\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.595399 kubelet[2877]: I1009 07:55:17.593745 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ae7e645-c588-418c-9414-5d774b9a468b-node-certs\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.596835 kubelet[2877]: I1009 07:55:17.593775 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-cni-bin-dir\") pod 
\"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.596835 kubelet[2877]: I1009 07:55:17.593826 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ae7e645-c588-418c-9414-5d774b9a468b-var-lib-calico\") pod \"calico-node-d7skx\" (UID: \"7ae7e645-c588-418c-9414-5d774b9a468b\") " pod="calico-system/calico-node-d7skx" Oct 9 07:55:17.631328 kubelet[2877]: I1009 07:55:17.628598 2877 topology_manager.go:215] "Topology Admit Handler" podUID="c258f453-4ba7-47ef-a510-74cac7855910" podNamespace="calico-system" podName="csi-node-driver-6frp9" Oct 9 07:55:17.631328 kubelet[2877]: E1009 07:55:17.629071 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:17.679082 containerd[1626]: time="2024-10-09T07:55:17.678194076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b7fdc47c-d5jgt,Uid:8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a,Namespace:calico-system,Attempt:0,}" Oct 9 07:55:17.698283 kubelet[2877]: I1009 07:55:17.696468 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c258f453-4ba7-47ef-a510-74cac7855910-varrun\") pod \"csi-node-driver-6frp9\" (UID: \"c258f453-4ba7-47ef-a510-74cac7855910\") " pod="calico-system/csi-node-driver-6frp9" Oct 9 07:55:17.698283 kubelet[2877]: I1009 07:55:17.697013 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c258f453-4ba7-47ef-a510-74cac7855910-registration-dir\") pod \"csi-node-driver-6frp9\" (UID: \"c258f453-4ba7-47ef-a510-74cac7855910\") " pod="calico-system/csi-node-driver-6frp9" Oct 9 07:55:17.698283 kubelet[2877]: I1009 07:55:17.698180 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c258f453-4ba7-47ef-a510-74cac7855910-kubelet-dir\") pod \"csi-node-driver-6frp9\" (UID: \"c258f453-4ba7-47ef-a510-74cac7855910\") " pod="calico-system/csi-node-driver-6frp9" Oct 9 07:55:17.698577 kubelet[2877]: I1009 07:55:17.698563 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmpc9\" (UniqueName: \"kubernetes.io/projected/c258f453-4ba7-47ef-a510-74cac7855910-kube-api-access-kmpc9\") pod \"csi-node-driver-6frp9\" (UID: \"c258f453-4ba7-47ef-a510-74cac7855910\") " pod="calico-system/csi-node-driver-6frp9" Oct 9 07:55:17.701922 kubelet[2877]: I1009 07:55:17.698676 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c258f453-4ba7-47ef-a510-74cac7855910-socket-dir\") pod \"csi-node-driver-6frp9\" (UID: \"c258f453-4ba7-47ef-a510-74cac7855910\") " pod="calico-system/csi-node-driver-6frp9" Oct 9 07:55:17.725428 kubelet[2877]: E1009 07:55:17.724028 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.725428 kubelet[2877]: W1009 07:55:17.724090 2877 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.725428 kubelet[2877]: E1009 07:55:17.724197 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.751707 kubelet[2877]: E1009 07:55:17.750143 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.751707 kubelet[2877]: W1009 07:55:17.750178 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.751707 kubelet[2877]: E1009 07:55:17.750252 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.787249 containerd[1626]: time="2024-10-09T07:55:17.786990350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:17.787249 containerd[1626]: time="2024-10-09T07:55:17.787144944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:17.789415 containerd[1626]: time="2024-10-09T07:55:17.789373821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7skx,Uid:7ae7e645-c588-418c-9414-5d774b9a468b,Namespace:calico-system,Attempt:0,}" Oct 9 07:55:17.789850 containerd[1626]: time="2024-10-09T07:55:17.787188662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:17.789850 containerd[1626]: time="2024-10-09T07:55:17.788558082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:17.800112 kubelet[2877]: E1009 07:55:17.799869 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.800112 kubelet[2877]: W1009 07:55:17.799902 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.800385 kubelet[2877]: E1009 07:55:17.800171 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.803639 kubelet[2877]: E1009 07:55:17.800751 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.803639 kubelet[2877]: W1009 07:55:17.800891 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.803639 kubelet[2877]: E1009 07:55:17.800911 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:55:17.803639 kubelet[2877]: E1009 07:55:17.801516 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.803639 kubelet[2877]: W1009 07:55:17.801531 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.803639 kubelet[2877]: E1009 07:55:17.801657 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.803639 kubelet[2877]: E1009 07:55:17.802340 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.803639 kubelet[2877]: W1009 07:55:17.802366 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.803639 kubelet[2877]: E1009 07:55:17.802494 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.803639 kubelet[2877]: E1009 07:55:17.803409 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.804196 kubelet[2877]: W1009 07:55:17.803423 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.804196 kubelet[2877]: E1009 07:55:17.803452 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.804196 kubelet[2877]: E1009 07:55:17.803751 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.804196 kubelet[2877]: W1009 07:55:17.803765 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.804196 kubelet[2877]: E1009 07:55:17.803782 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.804196 kubelet[2877]: E1009 07:55:17.804172 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.804196 kubelet[2877]: W1009 07:55:17.804185 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.808227 kubelet[2877]: E1009 07:55:17.804325 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:55:17.808227 kubelet[2877]: E1009 07:55:17.804816 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.808227 kubelet[2877]: W1009 07:55:17.804829 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.808227 kubelet[2877]: E1009 07:55:17.804846 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.808227 kubelet[2877]: E1009 07:55:17.806030 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.808227 kubelet[2877]: W1009 07:55:17.806043 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.808227 kubelet[2877]: E1009 07:55:17.806092 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.808227 kubelet[2877]: E1009 07:55:17.806862 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.808227 kubelet[2877]: W1009 07:55:17.806878 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.808227 kubelet[2877]: E1009 07:55:17.807020 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.808844 kubelet[2877]: E1009 07:55:17.807628 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.808844 kubelet[2877]: W1009 07:55:17.807642 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.808844 kubelet[2877]: E1009 07:55:17.807781 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.808844 kubelet[2877]: E1009 07:55:17.808142 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.808844 kubelet[2877]: W1009 07:55:17.808156 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.808844 kubelet[2877]: E1009 07:55:17.808725 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:55:17.810747 kubelet[2877]: E1009 07:55:17.808933 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.810747 kubelet[2877]: W1009 07:55:17.808947 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.810747 kubelet[2877]: E1009 07:55:17.810348 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.810747 kubelet[2877]: W1009 07:55:17.810362 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.810747 kubelet[2877]: E1009 07:55:17.810538 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.810747 kubelet[2877]: E1009 07:55:17.810588 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.810747 kubelet[2877]: E1009 07:55:17.810657 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.810747 kubelet[2877]: W1009 07:55:17.810688 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.819524 kubelet[2877]: E1009 07:55:17.810772 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.819524 kubelet[2877]: E1009 07:55:17.811021 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.819524 kubelet[2877]: W1009 07:55:17.811175 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.819524 kubelet[2877]: E1009 07:55:17.812252 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.819524 kubelet[2877]: E1009 07:55:17.812457 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.819524 kubelet[2877]: W1009 07:55:17.812470 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.819524 kubelet[2877]: E1009 07:55:17.812603 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:55:17.819524 kubelet[2877]: E1009 07:55:17.812911 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.819524 kubelet[2877]: W1009 07:55:17.812953 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.819524 kubelet[2877]: E1009 07:55:17.813018 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.819958 kubelet[2877]: E1009 07:55:17.813439 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.819958 kubelet[2877]: W1009 07:55:17.813493 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.819958 kubelet[2877]: E1009 07:55:17.813757 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.819958 kubelet[2877]: E1009 07:55:17.814411 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.819958 kubelet[2877]: W1009 07:55:17.814425 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.819958 kubelet[2877]: E1009 07:55:17.814888 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.819958 kubelet[2877]: E1009 07:55:17.815126 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.819958 kubelet[2877]: W1009 07:55:17.815151 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.819958 kubelet[2877]: E1009 07:55:17.815576 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.819958 kubelet[2877]: E1009 07:55:17.817818 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.820401 kubelet[2877]: W1009 07:55:17.817852 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.820401 kubelet[2877]: E1009 07:55:17.817967 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:55:17.820401 kubelet[2877]: E1009 07:55:17.818151 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.820401 kubelet[2877]: W1009 07:55:17.818164 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.820401 kubelet[2877]: E1009 07:55:17.819061 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.820401 kubelet[2877]: E1009 07:55:17.819683 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.820401 kubelet[2877]: W1009 07:55:17.819729 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.820401 kubelet[2877]: E1009 07:55:17.819819 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.820401 kubelet[2877]: E1009 07:55:17.820280 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.820401 kubelet[2877]: W1009 07:55:17.820354 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.820819 kubelet[2877]: E1009 07:55:17.820444 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.844656 kubelet[2877]: E1009 07:55:17.844093 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:55:17.844656 kubelet[2877]: W1009 07:55:17.844121 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:55:17.844656 kubelet[2877]: E1009 07:55:17.844191 2877 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:55:17.901992 containerd[1626]: time="2024-10-09T07:55:17.900928637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:17.901992 containerd[1626]: time="2024-10-09T07:55:17.901009945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:17.901992 containerd[1626]: time="2024-10-09T07:55:17.901032430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:17.901992 containerd[1626]: time="2024-10-09T07:55:17.901046656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:17.991048 containerd[1626]: time="2024-10-09T07:55:17.990782147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b7fdc47c-d5jgt,Uid:8e41fa71-0bfc-47a1-9c9e-80f4a7b9098a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a7af8b19bddc047a36fa3f2b2825b1d22a9a1375bea5cdd5fe85329317f04af\"" Oct 9 07:55:18.026019 containerd[1626]: time="2024-10-09T07:55:18.025867310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7skx,Uid:7ae7e645-c588-418c-9414-5d774b9a468b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a26014f1516ff09e592e11888a1900359d9123388ead88ba790f8040e8d4e3b9\"" Oct 9 07:55:18.034229 containerd[1626]: time="2024-10-09T07:55:18.032577164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:55:19.085905 kubelet[2877]: E1009 07:55:19.085243 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:19.777872 containerd[1626]: time="2024-10-09T07:55:19.777818050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:19.781505 containerd[1626]: time="2024-10-09T07:55:19.779862753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:55:19.781505 containerd[1626]: time="2024-10-09T07:55:19.780849041Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:19.787239 containerd[1626]: time="2024-10-09T07:55:19.785336693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:19.787239 containerd[1626]: time="2024-10-09T07:55:19.786431063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.753777982s" Oct 9 07:55:19.787239 containerd[1626]: time="2024-10-09T07:55:19.786985155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:55:19.790330 containerd[1626]: time="2024-10-09T07:55:19.788069534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:55:19.790890 containerd[1626]: time="2024-10-09T07:55:19.790669578Z" level=info msg="CreateContainer within sandbox \"a26014f1516ff09e592e11888a1900359d9123388ead88ba790f8040e8d4e3b9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:55:19.832774 containerd[1626]: time="2024-10-09T07:55:19.832701413Z" level=info msg="CreateContainer within sandbox 
\"a26014f1516ff09e592e11888a1900359d9123388ead88ba790f8040e8d4e3b9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c3c18b805140a471fb569de8940857767588e29184347b86ba3930a21ef00aaa\"" Oct 9 07:55:19.836256 containerd[1626]: time="2024-10-09T07:55:19.835058781Z" level=info msg="StartContainer for \"c3c18b805140a471fb569de8940857767588e29184347b86ba3930a21ef00aaa\"" Oct 9 07:55:19.951737 containerd[1626]: time="2024-10-09T07:55:19.951681025Z" level=info msg="StartContainer for \"c3c18b805140a471fb569de8940857767588e29184347b86ba3930a21ef00aaa\" returns successfully" Oct 9 07:55:20.033801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3c18b805140a471fb569de8940857767588e29184347b86ba3930a21ef00aaa-rootfs.mount: Deactivated successfully. Oct 9 07:55:20.068030 containerd[1626]: time="2024-10-09T07:55:20.067780235Z" level=info msg="shim disconnected" id=c3c18b805140a471fb569de8940857767588e29184347b86ba3930a21ef00aaa namespace=k8s.io Oct 9 07:55:20.068030 containerd[1626]: time="2024-10-09T07:55:20.067890245Z" level=warning msg="cleaning up after shim disconnected" id=c3c18b805140a471fb569de8940857767588e29184347b86ba3930a21ef00aaa namespace=k8s.io Oct 9 07:55:20.068626 containerd[1626]: time="2024-10-09T07:55:20.067905909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:55:21.086133 kubelet[2877]: E1009 07:55:21.085895 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:23.089418 kubelet[2877]: E1009 07:55:23.089315 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:23.230820 containerd[1626]: time="2024-10-09T07:55:23.230494709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:23.231957 containerd[1626]: time="2024-10-09T07:55:23.230893997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:55:23.234861 containerd[1626]: time="2024-10-09T07:55:23.233596399Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:23.238267 containerd[1626]: time="2024-10-09T07:55:23.238193904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:23.239642 containerd[1626]: time="2024-10-09T07:55:23.239570236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.451446072s" Oct 9 07:55:23.239727 containerd[1626]: 
time="2024-10-09T07:55:23.239668106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:55:23.242500 containerd[1626]: time="2024-10-09T07:55:23.242465436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:55:23.264231 containerd[1626]: time="2024-10-09T07:55:23.263516308Z" level=info msg="CreateContainer within sandbox \"0a7af8b19bddc047a36fa3f2b2825b1d22a9a1375bea5cdd5fe85329317f04af\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:55:23.304397 containerd[1626]: time="2024-10-09T07:55:23.304339241Z" level=info msg="CreateContainer within sandbox \"0a7af8b19bddc047a36fa3f2b2825b1d22a9a1375bea5cdd5fe85329317f04af\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bc2a494466d2ecbb371dfdc03ed45936e311a55478a02216333ee9da37bb534c\"" Oct 9 07:55:23.307181 containerd[1626]: time="2024-10-09T07:55:23.307133419Z" level=info msg="StartContainer for \"bc2a494466d2ecbb371dfdc03ed45936e311a55478a02216333ee9da37bb534c\"" Oct 9 07:55:23.430818 containerd[1626]: time="2024-10-09T07:55:23.430656647Z" level=info msg="StartContainer for \"bc2a494466d2ecbb371dfdc03ed45936e311a55478a02216333ee9da37bb534c\" returns successfully" Oct 9 07:55:25.085868 kubelet[2877]: E1009 07:55:25.085820 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:25.208894 kubelet[2877]: I1009 07:55:25.208854 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:55:27.086658 kubelet[2877]: E1009 07:55:27.086149 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:29.085834 kubelet[2877]: E1009 07:55:29.085725 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:29.332328 containerd[1626]: time="2024-10-09T07:55:29.332189241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:29.334370 containerd[1626]: time="2024-10-09T07:55:29.332973305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:55:29.341347 containerd[1626]: time="2024-10-09T07:55:29.341138765Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:29.345241 containerd[1626]: time="2024-10-09T07:55:29.344289752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Oct 9 07:55:29.346321 containerd[1626]: time="2024-10-09T07:55:29.345446519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.102737207s" Oct 9 07:55:29.346321 containerd[1626]: time="2024-10-09T07:55:29.345502028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:55:29.348066 containerd[1626]: time="2024-10-09T07:55:29.348032882Z" level=info msg="CreateContainer within sandbox \"a26014f1516ff09e592e11888a1900359d9123388ead88ba790f8040e8d4e3b9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:55:29.371189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816875397.mount: Deactivated successfully. Oct 9 07:55:29.373273 containerd[1626]: time="2024-10-09T07:55:29.373182141Z" level=info msg="CreateContainer within sandbox \"a26014f1516ff09e592e11888a1900359d9123388ead88ba790f8040e8d4e3b9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a4250762abf0077238828b94d36b3985f3400b95bea05bd883b7ee5237939cb8\"" Oct 9 07:55:29.374797 containerd[1626]: time="2024-10-09T07:55:29.374725997Z" level=info msg="StartContainer for \"a4250762abf0077238828b94d36b3985f3400b95bea05bd883b7ee5237939cb8\"" Oct 9 07:55:29.510398 containerd[1626]: time="2024-10-09T07:55:29.510351325Z" level=info msg="StartContainer for \"a4250762abf0077238828b94d36b3985f3400b95bea05bd883b7ee5237939cb8\" returns successfully" Oct 9 07:55:30.269188 kubelet[2877]: I1009 07:55:30.269054 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-84b7fdc47c-d5jgt" podStartSLOduration=8.061908562 podStartE2EDuration="13.268907811s" podCreationTimestamp="2024-10-09 07:55:17 +0000 UTC" firstStartedPulling="2024-10-09 07:55:18.033318352 +0000 UTC m=+20.201829325" lastFinishedPulling="2024-10-09 07:55:23.240317582 +0000 UTC m=+25.408828574" observedRunningTime="2024-10-09 07:55:24.221825099 +0000 UTC m=+26.390336072" watchObservedRunningTime="2024-10-09 07:55:30.268907811 +0000 UTC m=+32.437418798" Oct 9 07:55:30.512262 kubelet[2877]: I1009 07:55:30.509638 2877 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:55:30.557269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4250762abf0077238828b94d36b3985f3400b95bea05bd883b7ee5237939cb8-rootfs.mount: Deactivated successfully. 
Oct 9 07:55:30.573818 kubelet[2877]: I1009 07:55:30.570060 2877 topology_manager.go:215] "Topology Admit Handler" podUID="7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba" podNamespace="calico-system" podName="calico-kube-controllers-55d54487b6-jf2kn" Oct 9 07:55:30.578303 kubelet[2877]: I1009 07:55:30.577552 2877 topology_manager.go:215] "Topology Admit Handler" podUID="5221c43d-cc16-4dce-aec2-ff2ae5da0e97" podNamespace="kube-system" podName="coredns-76f75df574-2t599" Oct 9 07:55:30.579096 kubelet[2877]: I1009 07:55:30.578896 2877 topology_manager.go:215] "Topology Admit Handler" podUID="8dc77e6f-1a05-4ddb-9194-edc11be626aa" podNamespace="kube-system" podName="coredns-76f75df574-mjvxj" Oct 9 07:55:30.620149 kubelet[2877]: I1009 07:55:30.620093 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba-tigera-ca-bundle\") pod \"calico-kube-controllers-55d54487b6-jf2kn\" (UID: \"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba\") " pod="calico-system/calico-kube-controllers-55d54487b6-jf2kn" Oct 9 07:55:30.620372 kubelet[2877]: I1009 07:55:30.620174 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnbp\" (UniqueName: \"kubernetes.io/projected/7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba-kube-api-access-bgnbp\") pod \"calico-kube-controllers-55d54487b6-jf2kn\" (UID: \"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba\") " pod="calico-system/calico-kube-controllers-55d54487b6-jf2kn" Oct 9 07:55:30.620372 kubelet[2877]: I1009 07:55:30.620242 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7zb\" (UniqueName: \"kubernetes.io/projected/8dc77e6f-1a05-4ddb-9194-edc11be626aa-kube-api-access-2v7zb\") pod \"coredns-76f75df574-mjvxj\" (UID: \"8dc77e6f-1a05-4ddb-9194-edc11be626aa\") " pod="kube-system/coredns-76f75df574-mjvxj" Oct 9 07:55:30.620372 kubelet[2877]: I1009 07:55:30.620286 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4kc\" (UniqueName: \"kubernetes.io/projected/5221c43d-cc16-4dce-aec2-ff2ae5da0e97-kube-api-access-fw4kc\") pod \"coredns-76f75df574-2t599\" (UID: \"5221c43d-cc16-4dce-aec2-ff2ae5da0e97\") " pod="kube-system/coredns-76f75df574-2t599" Oct 9 07:55:30.620372 kubelet[2877]: I1009 07:55:30.620329 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8dc77e6f-1a05-4ddb-9194-edc11be626aa-config-volume\") pod \"coredns-76f75df574-mjvxj\" (UID: \"8dc77e6f-1a05-4ddb-9194-edc11be626aa\") " pod="kube-system/coredns-76f75df574-mjvxj" Oct 9 07:55:30.620372 kubelet[2877]: I1009 07:55:30.620370 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5221c43d-cc16-4dce-aec2-ff2ae5da0e97-config-volume\") pod \"coredns-76f75df574-2t599\" (UID: \"5221c43d-cc16-4dce-aec2-ff2ae5da0e97\") " pod="kube-system/coredns-76f75df574-2t599" Oct 9 07:55:30.635510 containerd[1626]: time="2024-10-09T07:55:30.635385490Z" level=info msg="shim disconnected" id=a4250762abf0077238828b94d36b3985f3400b95bea05bd883b7ee5237939cb8 namespace=k8s.io Oct 9 07:55:30.635510 containerd[1626]: time="2024-10-09T07:55:30.635495912Z" level=warning msg="cleaning up after shim disconnected" 
id=a4250762abf0077238828b94d36b3985f3400b95bea05bd883b7ee5237939cb8 namespace=k8s.io Oct 9 07:55:30.635510 containerd[1626]: time="2024-10-09T07:55:30.635511108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:55:30.654665 containerd[1626]: time="2024-10-09T07:55:30.654558606Z" level=warning msg="cleanup warnings time=\"2024-10-09T07:55:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 07:55:30.884761 containerd[1626]: time="2024-10-09T07:55:30.883949656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d54487b6-jf2kn,Uid:7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba,Namespace:calico-system,Attempt:0,}" Oct 9 07:55:30.892256 containerd[1626]: time="2024-10-09T07:55:30.892178812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2t599,Uid:5221c43d-cc16-4dce-aec2-ff2ae5da0e97,Namespace:kube-system,Attempt:0,}" Oct 9 07:55:30.894822 containerd[1626]: time="2024-10-09T07:55:30.894548796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mjvxj,Uid:8dc77e6f-1a05-4ddb-9194-edc11be626aa,Namespace:kube-system,Attempt:0,}" Oct 9 07:55:31.096258 containerd[1626]: time="2024-10-09T07:55:31.095371422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6frp9,Uid:c258f453-4ba7-47ef-a510-74cac7855910,Namespace:calico-system,Attempt:0,}" Oct 9 07:55:31.182114 containerd[1626]: time="2024-10-09T07:55:31.181916543Z" level=error msg="Failed to destroy network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.187592 containerd[1626]: time="2024-10-09T07:55:31.187549730Z" level=error msg="encountered an error cleaning up failed sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.216769 containerd[1626]: time="2024-10-09T07:55:31.216692014Z" level=error msg="Failed to destroy network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.218171 containerd[1626]: time="2024-10-09T07:55:31.218123163Z" level=error msg="encountered an error cleaning up failed sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.218882 containerd[1626]: time="2024-10-09T07:55:31.218841747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2t599,Uid:5221c43d-cc16-4dce-aec2-ff2ae5da0e97,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.219268 containerd[1626]: time="2024-10-09T07:55:31.219235166Z" level=error msg="Failed to destroy network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.219802 containerd[1626]: time="2024-10-09T07:55:31.219768641Z" level=error msg="encountered an error cleaning up failed sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.219935 containerd[1626]: time="2024-10-09T07:55:31.219901814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d54487b6-jf2kn,Uid:7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.230121 containerd[1626]: time="2024-10-09T07:55:31.229872613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mjvxj,Uid:8dc77e6f-1a05-4ddb-9194-edc11be626aa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.232990 kubelet[2877]: E1009 07:55:31.231107 2877 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.232990 kubelet[2877]: E1009 07:55:31.231092 2877 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.232990 kubelet[2877]: E1009 07:55:31.231319 2877 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.232990 kubelet[2877]: E1009 07:55:31.231388 2877 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2t599" Oct 9 07:55:31.233299 kubelet[2877]: E1009 07:55:31.231425 2877 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55d54487b6-jf2kn" Oct 9 07:55:31.233299 kubelet[2877]: E1009 07:55:31.231487 2877 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55d54487b6-jf2kn" Oct 9 07:55:31.233299 kubelet[2877]: E1009 07:55:31.231631 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55d54487b6-jf2kn_calico-system(7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55d54487b6-jf2kn_calico-system(7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55d54487b6-jf2kn" podUID="7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba" Oct 9 07:55:31.234783 kubelet[2877]: E1009 07:55:31.231393 2877 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mjvxj" Oct 9 07:55:31.234783 kubelet[2877]: E1009 07:55:31.232482 2877 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mjvxj" Oct 9 07:55:31.234783 kubelet[2877]: E1009 07:55:31.232646 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mjvxj_kube-system(8dc77e6f-1a05-4ddb-9194-edc11be626aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-mjvxj_kube-system(8dc77e6f-1a05-4ddb-9194-edc11be626aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mjvxj" podUID="8dc77e6f-1a05-4ddb-9194-edc11be626aa" Oct 9 07:55:31.234961 kubelet[2877]: E1009 07:55:31.231471 2877 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2t599" Oct 9 07:55:31.234961 kubelet[2877]: E1009 07:55:31.232926 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2t599_kube-system(5221c43d-cc16-4dce-aec2-ff2ae5da0e97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2t599_kube-system(5221c43d-cc16-4dce-aec2-ff2ae5da0e97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2t599" podUID="5221c43d-cc16-4dce-aec2-ff2ae5da0e97" Oct 9 07:55:31.252635 kubelet[2877]: I1009 07:55:31.252508 2877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:31.261156 containerd[1626]: time="2024-10-09T07:55:31.261075861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:55:31.287513 kubelet[2877]: I1009 07:55:31.286463 2877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:31.306237 containerd[1626]: time="2024-10-09T07:55:31.305100195Z" level=info msg="StopPodSandbox for \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\"" Oct 9 07:55:31.308274 containerd[1626]: time="2024-10-09T07:55:31.308238450Z" level=info msg="StopPodSandbox for \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\"" Oct 9 07:55:31.308727 containerd[1626]: time="2024-10-09T07:55:31.308688098Z" level=info msg="Ensure that sandbox e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e in task-service has been cleanup successfully" Oct 9 07:55:31.314772 kubelet[2877]: I1009 07:55:31.314738 2877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:31.317311 containerd[1626]: time="2024-10-09T07:55:31.317253816Z" level=info msg="Ensure that sandbox 904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4 in task-service has been cleanup successfully" Oct 9 07:55:31.321148 containerd[1626]: time="2024-10-09T07:55:31.321101428Z" level=info msg="StopPodSandbox for 
\"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\"" Oct 9 07:55:31.321530 containerd[1626]: time="2024-10-09T07:55:31.321498122Z" level=info msg="Ensure that sandbox d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4 in task-service has been cleanup successfully" Oct 9 07:55:31.340126 containerd[1626]: time="2024-10-09T07:55:31.340045110Z" level=error msg="Failed to destroy network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.340976 containerd[1626]: time="2024-10-09T07:55:31.340929739Z" level=error msg="encountered an error cleaning up failed sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.341128 containerd[1626]: time="2024-10-09T07:55:31.341095846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6frp9,Uid:c258f453-4ba7-47ef-a510-74cac7855910,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.343347 kubelet[2877]: E1009 07:55:31.343305 2877 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.343594 kubelet[2877]: E1009 07:55:31.343490 2877 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6frp9" Oct 9 07:55:31.343706 kubelet[2877]: E1009 07:55:31.343635 2877 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6frp9" Oct 9 07:55:31.344146 kubelet[2877]: E1009 07:55:31.343737 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6frp9_calico-system(c258f453-4ba7-47ef-a510-74cac7855910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6frp9_calico-system(c258f453-4ba7-47ef-a510-74cac7855910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:31.384533 containerd[1626]: time="2024-10-09T07:55:31.384463089Z" level=error msg="StopPodSandbox for \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\" failed" error="failed to destroy network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.385255 kubelet[2877]: E1009 07:55:31.384924 2877 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:31.391553 kubelet[2877]: E1009 07:55:31.390800 2877 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e"} Oct 9 07:55:31.391553 kubelet[2877]: E1009 07:55:31.391443 2877 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8dc77e6f-1a05-4ddb-9194-edc11be626aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:55:31.391553 kubelet[2877]: E1009 07:55:31.391504 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8dc77e6f-1a05-4ddb-9194-edc11be626aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mjvxj" podUID="8dc77e6f-1a05-4ddb-9194-edc11be626aa" Oct 9 07:55:31.402209 containerd[1626]: time="2024-10-09T07:55:31.401230438Z" level=error msg="StopPodSandbox for \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\" failed" error="failed to destroy network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.402407 kubelet[2877]: E1009 07:55:31.401526 2877 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:31.402407 kubelet[2877]: E1009 07:55:31.401570 2877 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4"} Oct 9 07:55:31.402407 kubelet[2877]: E1009 07:55:31.401630 2877 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:55:31.402407 kubelet[2877]: E1009 07:55:31.401683 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55d54487b6-jf2kn" podUID="7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba" Oct 9 07:55:31.402918 containerd[1626]: time="2024-10-09T07:55:31.402818362Z" level=error msg="StopPodSandbox for \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\" failed" error="failed to destroy network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:31.403067 kubelet[2877]: E1009 07:55:31.403032 2877 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:31.403192 kubelet[2877]: E1009 07:55:31.403067 2877 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4"} Oct 9 07:55:31.403192 kubelet[2877]: E1009 07:55:31.403108 2877 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5221c43d-cc16-4dce-aec2-ff2ae5da0e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" Oct 9 07:55:31.403192 kubelet[2877]: E1009 07:55:31.403182 2877 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5221c43d-cc16-4dce-aec2-ff2ae5da0e97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2t599" podUID="5221c43d-cc16-4dce-aec2-ff2ae5da0e97" Oct 9 07:55:31.562291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e-shm.mount: Deactivated successfully. Oct 9 07:55:31.562557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4-shm.mount: Deactivated successfully. Oct 9 07:55:32.318991 kubelet[2877]: I1009 07:55:32.318399 2877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:32.319637 containerd[1626]: time="2024-10-09T07:55:32.319065212Z" level=info msg="StopPodSandbox for \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\"" Oct 9 07:55:32.319637 containerd[1626]: time="2024-10-09T07:55:32.319395567Z" level=info msg="Ensure that sandbox 3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871 in task-service has been cleanup successfully" Oct 9 07:55:32.378575 containerd[1626]: time="2024-10-09T07:55:32.378422007Z" level=error msg="StopPodSandbox for \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\" failed" error="failed to destroy network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:55:32.378896 kubelet[2877]: E1009 07:55:32.378840 2877 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:32.379243 kubelet[2877]: E1009 07:55:32.378962 2877 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871"} Oct 9 07:55:32.379243 kubelet[2877]: E1009 07:55:32.379095 2877 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c258f453-4ba7-47ef-a510-74cac7855910\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:55:32.379243 kubelet[2877]: E1009 07:55:32.379167 2877 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c258f453-4ba7-47ef-a510-74cac7855910\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6frp9" podUID="c258f453-4ba7-47ef-a510-74cac7855910" Oct 9 07:55:33.643463 kubelet[2877]: I1009 07:55:33.642433 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:55:38.507821 systemd-journald[1177]: Under memory pressure, flushing caches. Oct 9 07:55:38.504972 systemd-resolved[1516]: Under memory pressure, flushing caches. Oct 9 07:55:38.505156 systemd-resolved[1516]: Flushed all caches. Oct 9 07:55:39.692487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221008456.mount: Deactivated successfully. Oct 9 07:55:39.749672 containerd[1626]: time="2024-10-09T07:55:39.749553031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:39.757791 containerd[1626]: time="2024-10-09T07:55:39.757572270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:55:39.765235 containerd[1626]: time="2024-10-09T07:55:39.765143225Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:39.770457 containerd[1626]: time="2024-10-09T07:55:39.770264378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:39.774264 containerd[1626]: time="2024-10-09T07:55:39.774014706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 8.510481522s" Oct 9 07:55:39.774264 containerd[1626]: time="2024-10-09T07:55:39.774091134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:55:39.827245 containerd[1626]: time="2024-10-09T07:55:39.827098540Z" level=info msg="CreateContainer within sandbox \"a26014f1516ff09e592e11888a1900359d9123388ead88ba790f8040e8d4e3b9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:55:39.880853 containerd[1626]: time="2024-10-09T07:55:39.880749501Z" level=info msg="CreateContainer within sandbox \"a26014f1516ff09e592e11888a1900359d9123388ead88ba790f8040e8d4e3b9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c416ddd3a5e102342b567044c83525bcfd4d27c7ef0fbb0ac2bf6874013af7f\"" Oct 9 07:55:39.882776 containerd[1626]: time="2024-10-09T07:55:39.882681709Z" level=info msg="StartContainer for \"9c416ddd3a5e102342b567044c83525bcfd4d27c7ef0fbb0ac2bf6874013af7f\"" Oct 9 07:55:40.036406 containerd[1626]: 
time="2024-10-09T07:55:40.036325404Z" level=info msg="StartContainer for \"9c416ddd3a5e102342b567044c83525bcfd4d27c7ef0fbb0ac2bf6874013af7f\" returns successfully" Oct 9 07:55:40.146525 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:55:40.146748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 9 07:55:40.393879 kubelet[2877]: I1009 07:55:40.393743 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-d7skx" podStartSLOduration=1.643780228 podStartE2EDuration="23.390033354s" podCreationTimestamp="2024-10-09 07:55:17 +0000 UTC" firstStartedPulling="2024-10-09 07:55:18.028338111 +0000 UTC m=+20.196849085" lastFinishedPulling="2024-10-09 07:55:39.774591232 +0000 UTC m=+41.943102211" observedRunningTime="2024-10-09 07:55:40.389536574 +0000 UTC m=+42.558047566" watchObservedRunningTime="2024-10-09 07:55:40.390033354 +0000 UTC m=+42.558544337" Oct 9 07:55:40.558803 systemd-journald[1177]: Under memory pressure, flushing caches. Oct 9 07:55:40.556783 systemd-resolved[1516]: Under memory pressure, flushing caches. Oct 9 07:55:40.558715 systemd-resolved[1516]: Flushed all caches. Oct 9 07:55:41.356526 kubelet[2877]: I1009 07:55:41.356484 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:55:42.231246 kernel: bpftool[3942]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:55:42.585568 systemd-networkd[1270]: vxlan.calico: Link UP Oct 9 07:55:42.585580 systemd-networkd[1270]: vxlan.calico: Gained carrier Oct 9 07:55:44.008500 systemd-networkd[1270]: vxlan.calico: Gained IPv6LL Oct 9 07:55:44.389545 kubelet[2877]: I1009 07:55:44.389384 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:55:44.496549 systemd[1]: run-containerd-runc-k8s.io-9c416ddd3a5e102342b567044c83525bcfd4d27c7ef0fbb0ac2bf6874013af7f-runc.QfLrPS.mount: Deactivated successfully. Oct 9 07:55:45.087951 containerd[1626]: time="2024-10-09T07:55:45.087852567Z" level=info msg="StopPodSandbox for \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\"" Oct 9 07:55:45.090747 containerd[1626]: time="2024-10-09T07:55:45.088576450Z" level=info msg="StopPodSandbox for \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\"" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.187 [INFO][4094] k8s.go 608: Cleaning up netns ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.191 [INFO][4094] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" iface="eth0" netns="/var/run/netns/cni-7218f702-efac-1df8-7a15-2987fbdcd809" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.192 [INFO][4094] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" iface="eth0" netns="/var/run/netns/cni-7218f702-efac-1df8-7a15-2987fbdcd809" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.193 [INFO][4094] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" iface="eth0" netns="/var/run/netns/cni-7218f702-efac-1df8-7a15-2987fbdcd809" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.193 [INFO][4094] k8s.go 615: Releasing IP address(es) ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.193 [INFO][4094] utils.go 188: Calico CNI releasing IP address ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.348 [INFO][4108] ipam_plugin.go 417: Releasing address using handleID ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.349 [INFO][4108] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.350 [INFO][4108] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.363 [WARNING][4108] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.363 [INFO][4108] ipam_plugin.go 445: Releasing address using workloadID ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.365 [INFO][4108] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:45.373159 containerd[1626]: 2024-10-09 07:55:45.367 [INFO][4094] k8s.go 621: Teardown processing complete. ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:45.375724 containerd[1626]: time="2024-10-09T07:55:45.374122691Z" level=info msg="TearDown network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\" successfully" Oct 9 07:55:45.375724 containerd[1626]: time="2024-10-09T07:55:45.374163167Z" level=info msg="StopPodSandbox for \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\" returns successfully" Oct 9 07:55:45.380785 systemd[1]: run-netns-cni\x2d7218f702\x2defac\x2d1df8\x2d7a15\x2d2987fbdcd809.mount: Deactivated successfully. Oct 9 07:55:45.385335 containerd[1626]: time="2024-10-09T07:55:45.385295485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2t599,Uid:5221c43d-cc16-4dce-aec2-ff2ae5da0e97,Namespace:kube-system,Attempt:1,}" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.190 [INFO][4095] k8s.go 608: Cleaning up netns ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.191 [INFO][4095] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" iface="eth0" netns="/var/run/netns/cni-58b7e767-bbfd-4226-6602-5f8c46e327f8" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.191 [INFO][4095] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" iface="eth0" netns="/var/run/netns/cni-58b7e767-bbfd-4226-6602-5f8c46e327f8" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.193 [INFO][4095] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" iface="eth0" netns="/var/run/netns/cni-58b7e767-bbfd-4226-6602-5f8c46e327f8" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.193 [INFO][4095] k8s.go 615: Releasing IP address(es) ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.193 [INFO][4095] utils.go 188: Calico CNI releasing IP address ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.348 [INFO][4107] ipam_plugin.go 417: Releasing address using handleID ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.349 [INFO][4107] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.365 [INFO][4107] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.375 [WARNING][4107] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.376 [INFO][4107] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.379 [INFO][4107] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:45.387418 containerd[1626]: 2024-10-09 07:55:45.385 [INFO][4095] k8s.go 621: Teardown processing complete. 
ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:45.390927 containerd[1626]: time="2024-10-09T07:55:45.388189091Z" level=info msg="TearDown network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\" successfully" Oct 9 07:55:45.390927 containerd[1626]: time="2024-10-09T07:55:45.388271361Z" level=info msg="StopPodSandbox for \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\" returns successfully" Oct 9 07:55:45.390927 containerd[1626]: time="2024-10-09T07:55:45.389361647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mjvxj,Uid:8dc77e6f-1a05-4ddb-9194-edc11be626aa,Namespace:kube-system,Attempt:1,}" Oct 9 07:55:45.392334 systemd[1]: run-netns-cni\x2d58b7e767\x2dbbfd\x2d4226\x2d6602\x2d5f8c46e327f8.mount: Deactivated successfully. Oct 9 07:55:45.600320 systemd-networkd[1270]: cali85d158e7bd3: Link UP Oct 9 07:55:45.604049 systemd-networkd[1270]: cali85d158e7bd3: Gained carrier Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.464 [INFO][4122] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0 coredns-76f75df574- kube-system 5221c43d-cc16-4dce-aec2-ff2ae5da0e97 685 0 2024-10-09 07:55:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-9xk3k.gb1.brightbox.com coredns-76f75df574-2t599 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85d158e7bd3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.464 [INFO][4122] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.527 [INFO][4147] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" HandleID="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.543 [INFO][4147] ipam_plugin.go 270: Auto assigning IP ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" HandleID="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051590), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-9xk3k.gb1.brightbox.com", "pod":"coredns-76f75df574-2t599", "timestamp":"2024-10-09 07:55:45.527307384 +0000 UTC"}, Hostname:"srv-9xk3k.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 
07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.543 [INFO][4147] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.544 [INFO][4147] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.544 [INFO][4147] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-9xk3k.gb1.brightbox.com' Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.547 [INFO][4147] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.556 [INFO][4147] ipam.go 372: Looking up existing affinities for host host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.562 [INFO][4147] ipam.go 489: Trying affinity for 192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.564 [INFO][4147] ipam.go 155: Attempting to load block cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.567 [INFO][4147] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.567 [INFO][4147] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.568 [INFO][4147] ipam.go 1685: Creating new handle: k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9 Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.576 [INFO][4147] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.583 [INFO][4147] ipam.go 1216: Successfully claimed IPs: [192.168.5.129/26] block=192.168.5.128/26 handle="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.583 [INFO][4147] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.129/26] handle="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.583 [INFO][4147] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:55:45.641746 containerd[1626]: 2024-10-09 07:55:45.583 [INFO][4147] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.5.129/26] IPv6=[] ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" HandleID="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.644669 containerd[1626]: 2024-10-09 07:55:45.588 [INFO][4122] k8s.go 386: Populated endpoint ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5221c43d-cc16-4dce-aec2-ff2ae5da0e97", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-2t599", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85d158e7bd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:45.644669 containerd[1626]: 2024-10-09 07:55:45.588 [INFO][4122] k8s.go 387: Calico CNI using IPs: [192.168.5.129/32] ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.644669 containerd[1626]: 2024-10-09 07:55:45.589 [INFO][4122] dataplane_linux.go 68: Setting the host side veth name to cali85d158e7bd3 ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.644669 containerd[1626]: 2024-10-09 07:55:45.600 [INFO][4122] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" 
WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.644669 containerd[1626]: 2024-10-09 07:55:45.609 [INFO][4122] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5221c43d-cc16-4dce-aec2-ff2ae5da0e97", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9", Pod:"coredns-76f75df574-2t599", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85d158e7bd3", MAC:"ce:4c:81:15:2b:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:45.644669 containerd[1626]: 2024-10-09 07:55:45.630 [INFO][4122] k8s.go 500: Wrote updated endpoint to datastore ContainerID="15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" Namespace="kube-system" Pod="coredns-76f75df574-2t599" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:45.662300 systemd-networkd[1270]: cali68d09508bcf: Link UP Oct 9 07:55:45.662645 systemd-networkd[1270]: cali68d09508bcf: Gained carrier Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.465 [INFO][4120] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0 coredns-76f75df574- kube-system 8dc77e6f-1a05-4ddb-9194-edc11be626aa 686 0 2024-10-09 07:55:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-9xk3k.gb1.brightbox.com coredns-76f75df574-mjvxj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68d09508bcf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.465 [INFO][4120] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.528 [INFO][4143] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" HandleID="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.547 [INFO][4143] ipam_plugin.go 270: Auto assigning IP ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" HandleID="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b140), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-9xk3k.gb1.brightbox.com", "pod":"coredns-76f75df574-mjvxj", "timestamp":"2024-10-09 07:55:45.528361731 +0000 UTC"}, Hostname:"srv-9xk3k.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.547 [INFO][4143] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.584 [INFO][4143] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.584 [INFO][4143] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-9xk3k.gb1.brightbox.com' Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.587 [INFO][4143] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.599 [INFO][4143] ipam.go 372: Looking up existing affinities for host host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.612 [INFO][4143] ipam.go 489: Trying affinity for 192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.615 [INFO][4143] ipam.go 155: Attempting to load block cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.618 [INFO][4143] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.618 [INFO][4143] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.620 [INFO][4143] ipam.go 1685: Creating new handle: k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0 Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.630 [INFO][4143] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.644 [INFO][4143] ipam.go 1216: Successfully claimed IPs: [192.168.5.130/26] block=192.168.5.128/26 handle="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.644 [INFO][4143] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.130/26] handle="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.644 [INFO][4143] ipam_plugin.go 379: Released host-wide IPAM lock. 
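[editorial note] Each successful claim ends with an "Auto-assigned 1 out of 1 IPv4s" message carrying both the address and the sandbox handle. A rough Python sketch for pulling those pairs out of a journal dump follows; the regex is an assumption matching only the line shape visible above, and the two sample lines are the payloads of the CoreDNS entries above with the journal prefix dropped.

import re

# Trimmed copies of the two "Auto-assigned" payloads above (journal prefix dropped).
sample = """\
2024-10-09 07:55:45.583 [INFO][4147] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.129/26] handle="k8s-pod-network.15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9" host="srv-9xk3k.gb1.brightbox.com"
2024-10-09 07:55:45.644 [INFO][4143] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.130/26] handle="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" host="srv-9xk3k.gb1.brightbox.com"
"""

pattern = re.compile(r'Auto-assigned 1 out of 1 IPv4s: \[([\d.]+)/\d+\] handle="k8s-pod-network\.([0-9a-f]+)"')

for ip, container_id in pattern.findall(sample):
    print(container_id[:12], "->", ip)   # short sandbox ID -> assigned pod IP
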
Oct 9 07:55:45.696545 containerd[1626]: 2024-10-09 07:55:45.644 [INFO][4143] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.5.130/26] IPv6=[] ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" HandleID="k8s-pod-network.619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.699048 containerd[1626]: 2024-10-09 07:55:45.654 [INFO][4120] k8s.go 386: Populated endpoint ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8dc77e6f-1a05-4ddb-9194-edc11be626aa", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-mjvxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68d09508bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:45.699048 containerd[1626]: 2024-10-09 07:55:45.655 [INFO][4120] k8s.go 387: Calico CNI using IPs: [192.168.5.130/32] ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.699048 containerd[1626]: 2024-10-09 07:55:45.655 [INFO][4120] dataplane_linux.go 68: Setting the host side veth name to cali68d09508bcf ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.699048 containerd[1626]: 2024-10-09 07:55:45.664 [INFO][4120] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" 
WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.699048 containerd[1626]: 2024-10-09 07:55:45.664 [INFO][4120] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8dc77e6f-1a05-4ddb-9194-edc11be626aa", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0", Pod:"coredns-76f75df574-mjvxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68d09508bcf", MAC:"22:11:25:59:15:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:45.699048 containerd[1626]: 2024-10-09 07:55:45.692 [INFO][4120] k8s.go 500: Wrote updated endpoint to datastore ContainerID="619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0" Namespace="kube-system" Pod="coredns-76f75df574-mjvxj" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:45.776837 containerd[1626]: time="2024-10-09T07:55:45.776124663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:45.777306 containerd[1626]: time="2024-10-09T07:55:45.776963091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:45.777625 containerd[1626]: time="2024-10-09T07:55:45.777543821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:45.778507 containerd[1626]: time="2024-10-09T07:55:45.777904164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:45.779252 containerd[1626]: time="2024-10-09T07:55:45.779018780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:45.779252 containerd[1626]: time="2024-10-09T07:55:45.779116862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:45.779252 containerd[1626]: time="2024-10-09T07:55:45.779145905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:45.779252 containerd[1626]: time="2024-10-09T07:55:45.779164866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:45.902450 containerd[1626]: time="2024-10-09T07:55:45.902041690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mjvxj,Uid:8dc77e6f-1a05-4ddb-9194-edc11be626aa,Namespace:kube-system,Attempt:1,} returns sandbox id \"619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0\"" Oct 9 07:55:45.905021 containerd[1626]: time="2024-10-09T07:55:45.904684515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2t599,Uid:5221c43d-cc16-4dce-aec2-ff2ae5da0e97,Namespace:kube-system,Attempt:1,} returns sandbox id \"15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9\"" Oct 9 07:55:45.909524 containerd[1626]: time="2024-10-09T07:55:45.909357281Z" level=info msg="CreateContainer within sandbox \"619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:55:45.911309 containerd[1626]: time="2024-10-09T07:55:45.911257988Z" level=info msg="CreateContainer within sandbox \"15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:55:45.930176 containerd[1626]: time="2024-10-09T07:55:45.930125696Z" level=info msg="CreateContainer within sandbox \"619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d5df7b63dbd29f07eb44bfd3c7a073a1ff514b6d5469a87958896fda466f059\"" Oct 9 07:55:45.931808 containerd[1626]: time="2024-10-09T07:55:45.931586759Z" level=info msg="StartContainer for \"8d5df7b63dbd29f07eb44bfd3c7a073a1ff514b6d5469a87958896fda466f059\"" Oct 9 07:55:45.933417 containerd[1626]: time="2024-10-09T07:55:45.933275582Z" level=info msg="CreateContainer within sandbox \"15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f24c89a6294bb03a148afb64239bcb6f2651ac5e0189a375bd01fc122e8b10c\"" Oct 9 07:55:45.934587 containerd[1626]: time="2024-10-09T07:55:45.933877644Z" level=info msg="StartContainer for \"8f24c89a6294bb03a148afb64239bcb6f2651ac5e0189a375bd01fc122e8b10c\"" Oct 9 07:55:46.030512 containerd[1626]: time="2024-10-09T07:55:46.030440823Z" level=info msg="StartContainer for \"8f24c89a6294bb03a148afb64239bcb6f2651ac5e0189a375bd01fc122e8b10c\" returns successfully" Oct 9 07:55:46.030712 containerd[1626]: time="2024-10-09T07:55:46.030441008Z" level=info msg="StartContainer for \"8d5df7b63dbd29f07eb44bfd3c7a073a1ff514b6d5469a87958896fda466f059\" returns successfully" Oct 9 07:55:46.091819 containerd[1626]: 
time="2024-10-09T07:55:46.089227724Z" level=info msg="StopPodSandbox for \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\"" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.180 [INFO][4348] k8s.go 608: Cleaning up netns ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.182 [INFO][4348] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" iface="eth0" netns="/var/run/netns/cni-57aab2eb-b46d-a445-00f9-381a7d7e4274" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.183 [INFO][4348] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" iface="eth0" netns="/var/run/netns/cni-57aab2eb-b46d-a445-00f9-381a7d7e4274" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.183 [INFO][4348] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" iface="eth0" netns="/var/run/netns/cni-57aab2eb-b46d-a445-00f9-381a7d7e4274" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.183 [INFO][4348] k8s.go 615: Releasing IP address(es) ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.183 [INFO][4348] utils.go 188: Calico CNI releasing IP address ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.238 [INFO][4356] ipam_plugin.go 417: Releasing address using handleID ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.239 [INFO][4356] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.239 [INFO][4356] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.249 [WARNING][4356] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.250 [INFO][4356] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.252 [INFO][4356] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:46.286480 containerd[1626]: 2024-10-09 07:55:46.254 [INFO][4348] k8s.go 621: Teardown processing complete. 
ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:46.289362 containerd[1626]: time="2024-10-09T07:55:46.288628293Z" level=info msg="TearDown network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\" successfully" Oct 9 07:55:46.289362 containerd[1626]: time="2024-10-09T07:55:46.288666425Z" level=info msg="StopPodSandbox for \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\" returns successfully" Oct 9 07:55:46.290654 containerd[1626]: time="2024-10-09T07:55:46.290179546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d54487b6-jf2kn,Uid:7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba,Namespace:calico-system,Attempt:1,}" Oct 9 07:55:46.411403 kubelet[2877]: I1009 07:55:46.410663 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2t599" podStartSLOduration=36.410383866 podStartE2EDuration="36.410383866s" podCreationTimestamp="2024-10-09 07:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:55:46.408963564 +0000 UTC m=+48.577474574" watchObservedRunningTime="2024-10-09 07:55:46.410383866 +0000 UTC m=+48.578894856" Oct 9 07:55:46.487228 kubelet[2877]: I1009 07:55:46.482935 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mjvxj" podStartSLOduration=36.482870077 podStartE2EDuration="36.482870077s" podCreationTimestamp="2024-10-09 07:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:55:46.436357375 +0000 UTC m=+48.604868373" watchObservedRunningTime="2024-10-09 07:55:46.482870077 +0000 UTC m=+48.651381058" Oct 9 07:55:46.500360 systemd[1]: run-netns-cni\x2d57aab2eb\x2db46d\x2da445\x2d00f9\x2d381a7d7e4274.mount: Deactivated successfully. 
Oct 9 07:55:46.619329 systemd-networkd[1270]: cali4f922f5c149: Link UP Oct 9 07:55:46.622589 systemd-networkd[1270]: cali4f922f5c149: Gained carrier Oct 9 07:55:46.632421 systemd-networkd[1270]: cali85d158e7bd3: Gained IPv6LL Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.425 [INFO][4367] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0 calico-kube-controllers-55d54487b6- calico-system 7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba 702 0 2024-10-09 07:55:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55d54487b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-9xk3k.gb1.brightbox.com calico-kube-controllers-55d54487b6-jf2kn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4f922f5c149 [] []}} ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.425 [INFO][4367] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.513 [INFO][4379] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" HandleID="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.529 [INFO][4379] ipam_plugin.go 270: Auto assigning IP ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" HandleID="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-9xk3k.gb1.brightbox.com", "pod":"calico-kube-controllers-55d54487b6-jf2kn", "timestamp":"2024-10-09 07:55:46.513646004 +0000 UTC"}, Hostname:"srv-9xk3k.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.529 [INFO][4379] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.529 [INFO][4379] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.529 [INFO][4379] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-9xk3k.gb1.brightbox.com' Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.537 [INFO][4379] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.552 [INFO][4379] ipam.go 372: Looking up existing affinities for host host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.561 [INFO][4379] ipam.go 489: Trying affinity for 192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.567 [INFO][4379] ipam.go 155: Attempting to load block cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.585 [INFO][4379] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.585 [INFO][4379] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.588 [INFO][4379] ipam.go 1685: Creating new handle: k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604 Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.596 [INFO][4379] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.605 [INFO][4379] ipam.go 1216: Successfully claimed IPs: [192.168.5.131/26] block=192.168.5.128/26 handle="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.605 [INFO][4379] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.131/26] handle="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.605 [INFO][4379] ipam_plugin.go 379: Released host-wide IPAM lock. 
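[editorial note] Note how the three claims so far (.129, .130, .131) are each bracketed by "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock": plugin [4143] asked for the lock at 07:55:45.547 but only got it at 07:55:45.584, right after [4147] released it, so concurrent CNI ADDs on this node serialize their address claims. The snippet below is a toy Python analogy of that pattern only, not Calico's actual implementation.

import threading

host_ipam_lock = threading.Lock()   # stands in for the host-wide IPAM lock seen in the log
free = [f"192.168.5.{n}" for n in range(129, 192)]   # usable tail of the 192.168.5.128/26 block
claimed = {}

def cni_add(pod):
    # Each concurrent CNI ADD must take the lock before claiming the next free address.
    with host_ipam_lock:
        claimed[pod] = free.pop(0)

pods = ["coredns-76f75df574-2t599", "coredns-76f75df574-mjvxj", "calico-kube-controllers-55d54487b6-jf2kn"]
threads = [threading.Thread(target=cni_add, args=(p,)) for p in pods]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(claimed)   # each pod gets a distinct address; the ordering depends on thread scheduling
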
Oct 9 07:55:46.647968 containerd[1626]: 2024-10-09 07:55:46.605 [INFO][4379] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.5.131/26] IPv6=[] ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" HandleID="k8s-pod-network.babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.651751 containerd[1626]: 2024-10-09 07:55:46.610 [INFO][4367] k8s.go 386: Populated endpoint ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0", GenerateName:"calico-kube-controllers-55d54487b6-", Namespace:"calico-system", SelfLink:"", UID:"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d54487b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-55d54487b6-jf2kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f922f5c149", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:46.651751 containerd[1626]: 2024-10-09 07:55:46.610 [INFO][4367] k8s.go 387: Calico CNI using IPs: [192.168.5.131/32] ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.651751 containerd[1626]: 2024-10-09 07:55:46.610 [INFO][4367] dataplane_linux.go 68: Setting the host side veth name to cali4f922f5c149 ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.651751 containerd[1626]: 2024-10-09 07:55:46.617 [INFO][4367] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.651751 containerd[1626]: 2024-10-09 07:55:46.618 [INFO][4367] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0", GenerateName:"calico-kube-controllers-55d54487b6-", Namespace:"calico-system", SelfLink:"", UID:"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d54487b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604", Pod:"calico-kube-controllers-55d54487b6-jf2kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f922f5c149", MAC:"96:57:06:8d:b1:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:46.651751 containerd[1626]: 2024-10-09 07:55:46.636 [INFO][4367] k8s.go 500: Wrote updated endpoint to datastore ContainerID="babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604" Namespace="calico-system" Pod="calico-kube-controllers-55d54487b6-jf2kn" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:46.697472 containerd[1626]: time="2024-10-09T07:55:46.695522092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:46.699323 containerd[1626]: time="2024-10-09T07:55:46.698349790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:46.699323 containerd[1626]: time="2024-10-09T07:55:46.698649336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:46.699323 containerd[1626]: time="2024-10-09T07:55:46.698725111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:46.750898 systemd[1]: run-containerd-runc-k8s.io-babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604-runc.LfKxRH.mount: Deactivated successfully. 
Oct 9 07:55:46.821705 containerd[1626]: time="2024-10-09T07:55:46.820745922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55d54487b6-jf2kn,Uid:7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba,Namespace:calico-system,Attempt:1,} returns sandbox id \"babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604\"" Oct 9 07:55:46.830304 containerd[1626]: time="2024-10-09T07:55:46.830100474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:55:47.086585 containerd[1626]: time="2024-10-09T07:55:47.086440775Z" level=info msg="StopPodSandbox for \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\"" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.141 [INFO][4457] k8s.go 608: Cleaning up netns ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.143 [INFO][4457] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" iface="eth0" netns="/var/run/netns/cni-2f279f5a-68fb-5c94-2e13-5593b8b21c24" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.143 [INFO][4457] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" iface="eth0" netns="/var/run/netns/cni-2f279f5a-68fb-5c94-2e13-5593b8b21c24" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.144 [INFO][4457] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" iface="eth0" netns="/var/run/netns/cni-2f279f5a-68fb-5c94-2e13-5593b8b21c24" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.144 [INFO][4457] k8s.go 615: Releasing IP address(es) ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.144 [INFO][4457] utils.go 188: Calico CNI releasing IP address ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.170 [INFO][4464] ipam_plugin.go 417: Releasing address using handleID ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.170 [INFO][4464] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.170 [INFO][4464] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.180 [WARNING][4464] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.180 [INFO][4464] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.183 [INFO][4464] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:47.190234 containerd[1626]: 2024-10-09 07:55:47.187 [INFO][4457] k8s.go 621: Teardown processing complete. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:47.192019 containerd[1626]: time="2024-10-09T07:55:47.191929394Z" level=info msg="TearDown network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\" successfully" Oct 9 07:55:47.192019 containerd[1626]: time="2024-10-09T07:55:47.191968957Z" level=info msg="StopPodSandbox for \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\" returns successfully" Oct 9 07:55:47.193853 containerd[1626]: time="2024-10-09T07:55:47.193817009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6frp9,Uid:c258f453-4ba7-47ef-a510-74cac7855910,Namespace:calico-system,Attempt:1,}" Oct 9 07:55:47.197758 systemd[1]: run-netns-cni\x2d2f279f5a\x2d68fb\x2d5c94\x2d2e13\x2d5593b8b21c24.mount: Deactivated successfully. Oct 9 07:55:47.275833 systemd-networkd[1270]: cali68d09508bcf: Gained IPv6LL Oct 9 07:55:47.351758 systemd-networkd[1270]: calib86cb8a3251: Link UP Oct 9 07:55:47.353876 systemd-networkd[1270]: calib86cb8a3251: Gained carrier Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.253 [INFO][4471] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0 csi-node-driver- calico-system c258f453-4ba7-47ef-a510-74cac7855910 726 0 2024-10-09 07:55:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s srv-9xk3k.gb1.brightbox.com csi-node-driver-6frp9 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib86cb8a3251 [] []}} ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.253 [INFO][4471] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.297 [INFO][4481] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" 
HandleID="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.311 [INFO][4481] ipam_plugin.go 270: Auto assigning IP ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" HandleID="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293c80), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-9xk3k.gb1.brightbox.com", "pod":"csi-node-driver-6frp9", "timestamp":"2024-10-09 07:55:47.297903275 +0000 UTC"}, Hostname:"srv-9xk3k.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.311 [INFO][4481] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.312 [INFO][4481] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.312 [INFO][4481] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-9xk3k.gb1.brightbox.com' Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.314 [INFO][4481] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.319 [INFO][4481] ipam.go 372: Looking up existing affinities for host host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.324 [INFO][4481] ipam.go 489: Trying affinity for 192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.326 [INFO][4481] ipam.go 155: Attempting to load block cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.329 [INFO][4481] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.329 [INFO][4481] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.331 [INFO][4481] ipam.go 1685: Creating new handle: k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04 Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.337 [INFO][4481] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.344 [INFO][4481] ipam.go 1216: Successfully claimed IPs: [192.168.5.132/26] block=192.168.5.128/26 handle="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.345 [INFO][4481] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.5.132/26] handle="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.345 [INFO][4481] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:47.377556 containerd[1626]: 2024-10-09 07:55:47.345 [INFO][4481] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.5.132/26] IPv6=[] ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" HandleID="k8s-pod-network.719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.379900 containerd[1626]: 2024-10-09 07:55:47.347 [INFO][4471] k8s.go 386: Populated endpoint ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c258f453-4ba7-47ef-a510-74cac7855910", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-6frp9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib86cb8a3251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:47.379900 containerd[1626]: 2024-10-09 07:55:47.347 [INFO][4471] k8s.go 387: Calico CNI using IPs: [192.168.5.132/32] ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.379900 containerd[1626]: 2024-10-09 07:55:47.347 [INFO][4471] dataplane_linux.go 68: Setting the host side veth name to calib86cb8a3251 ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.379900 containerd[1626]: 2024-10-09 07:55:47.354 [INFO][4471] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.379900 containerd[1626]: 
2024-10-09 07:55:47.354 [INFO][4471] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c258f453-4ba7-47ef-a510-74cac7855910", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04", Pod:"csi-node-driver-6frp9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib86cb8a3251", MAC:"a2:4d:75:ac:f7:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:47.379900 containerd[1626]: 2024-10-09 07:55:47.373 [INFO][4471] k8s.go 500: Wrote updated endpoint to datastore ContainerID="719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04" Namespace="calico-system" Pod="csi-node-driver-6frp9" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:47.414164 containerd[1626]: time="2024-10-09T07:55:47.413895584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:47.415319 containerd[1626]: time="2024-10-09T07:55:47.414052648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:47.415592 containerd[1626]: time="2024-10-09T07:55:47.415472710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:47.415592 containerd[1626]: time="2024-10-09T07:55:47.415551467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:47.465922 containerd[1626]: time="2024-10-09T07:55:47.465446373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6frp9,Uid:c258f453-4ba7-47ef-a510-74cac7855910,Namespace:calico-system,Attempt:1,} returns sandbox id \"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04\"" Oct 9 07:55:48.168638 systemd-networkd[1270]: cali4f922f5c149: Gained IPv6LL Oct 9 07:55:48.426375 systemd-networkd[1270]: calib86cb8a3251: Gained IPv6LL Oct 9 07:55:50.394335 containerd[1626]: time="2024-10-09T07:55:50.393877142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:50.395183 containerd[1626]: time="2024-10-09T07:55:50.395092234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:55:50.396905 containerd[1626]: time="2024-10-09T07:55:50.396826204Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:50.402863 containerd[1626]: time="2024-10-09T07:55:50.402350411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:50.403619 containerd[1626]: time="2024-10-09T07:55:50.403540177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.573376809s" Oct 9 07:55:50.403737 containerd[1626]: time="2024-10-09T07:55:50.403617223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:55:50.405843 containerd[1626]: time="2024-10-09T07:55:50.405794664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:55:50.459696 containerd[1626]: time="2024-10-09T07:55:50.459359139Z" level=info msg="CreateContainer within sandbox \"babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:55:50.480840 containerd[1626]: time="2024-10-09T07:55:50.480786988Z" level=info msg="CreateContainer within sandbox \"babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cd12cc02d463edaa00cad9d1e4c4107270e29a875852ed4af3e8a338f65d6f87\"" Oct 9 07:55:50.492368 containerd[1626]: time="2024-10-09T07:55:50.485385727Z" level=info msg="StartContainer for \"cd12cc02d463edaa00cad9d1e4c4107270e29a875852ed4af3e8a338f65d6f87\"" Oct 9 07:55:50.489216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408070516.mount: Deactivated successfully. 
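[editorial note] The kube-controllers pull above moved 33507125 bytes in 3.573376809s (the "bytes read" counter is what containerd fetched, which differs from the 34999494-byte size it reports alongside). A quick effective-rate calculation from those two figures:

bytes_read = 33_507_125   # "stop pulling image ...: active requests=0, bytes read=33507125"
seconds = 3.573376809     # "... in 3.573376809s"
print(f"{bytes_read / seconds / 1e6:.1f} MB/s")   # ~9.4 MB/s effective transfer rate for this pull
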
Oct 9 07:55:50.619473 containerd[1626]: time="2024-10-09T07:55:50.619363769Z" level=info msg="StartContainer for \"cd12cc02d463edaa00cad9d1e4c4107270e29a875852ed4af3e8a338f65d6f87\" returns successfully" Oct 9 07:55:51.565411 kubelet[2877]: I1009 07:55:51.564856 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55d54487b6-jf2kn" podStartSLOduration=30.985414922 podStartE2EDuration="34.564789204s" podCreationTimestamp="2024-10-09 07:55:17 +0000 UTC" firstStartedPulling="2024-10-09 07:55:46.825136371 +0000 UTC m=+48.993647344" lastFinishedPulling="2024-10-09 07:55:50.404510634 +0000 UTC m=+52.573021626" observedRunningTime="2024-10-09 07:55:51.457501481 +0000 UTC m=+53.626012488" watchObservedRunningTime="2024-10-09 07:55:51.564789204 +0000 UTC m=+53.733300191" Oct 9 07:55:52.140781 containerd[1626]: time="2024-10-09T07:55:52.140151677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:52.143578 containerd[1626]: time="2024-10-09T07:55:52.143492870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:55:52.144881 containerd[1626]: time="2024-10-09T07:55:52.144831749Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:52.147427 containerd[1626]: time="2024-10-09T07:55:52.147386195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:52.149040 containerd[1626]: time="2024-10-09T07:55:52.148829794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.742888517s" Oct 9 07:55:52.149040 containerd[1626]: time="2024-10-09T07:55:52.148875194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:55:52.153017 containerd[1626]: time="2024-10-09T07:55:52.152948225Z" level=info msg="CreateContainer within sandbox \"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:55:52.180250 containerd[1626]: time="2024-10-09T07:55:52.179841173Z" level=info msg="CreateContainer within sandbox \"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"07b964d4b4a60995bd32763a75319c7a50f2a4f6545623d6157d15e69c6551bb\"" Oct 9 07:55:52.181352 containerd[1626]: time="2024-10-09T07:55:52.181107782Z" level=info msg="StartContainer for \"07b964d4b4a60995bd32763a75319c7a50f2a4f6545623d6157d15e69c6551bb\"" Oct 9 07:55:52.312010 containerd[1626]: time="2024-10-09T07:55:52.311892047Z" level=info msg="StartContainer for \"07b964d4b4a60995bd32763a75319c7a50f2a4f6545623d6157d15e69c6551bb\" returns successfully" Oct 9 07:55:52.321711 containerd[1626]: time="2024-10-09T07:55:52.321355731Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:55:54.301763 containerd[1626]: time="2024-10-09T07:55:54.301646473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:54.303531 containerd[1626]: time="2024-10-09T07:55:54.303067048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:55:54.303878 containerd[1626]: time="2024-10-09T07:55:54.303841932Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:54.308396 containerd[1626]: time="2024-10-09T07:55:54.308315954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:55:54.310748 containerd[1626]: time="2024-10-09T07:55:54.310681281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.988547524s" Oct 9 07:55:54.311002 containerd[1626]: time="2024-10-09T07:55:54.310917414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:55:54.317123 containerd[1626]: time="2024-10-09T07:55:54.317087386Z" level=info msg="CreateContainer within sandbox \"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:55:54.345740 containerd[1626]: time="2024-10-09T07:55:54.345691025Z" level=info msg="CreateContainer within sandbox \"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"72f6b5913132b46f2cc096940c69f1a8b16a93fa46204f70c78bcfd22376b2f8\"" Oct 9 07:55:54.347054 containerd[1626]: time="2024-10-09T07:55:54.347022003Z" level=info msg="StartContainer for \"72f6b5913132b46f2cc096940c69f1a8b16a93fa46204f70c78bcfd22376b2f8\"" Oct 9 07:55:54.406213 systemd[1]: run-containerd-runc-k8s.io-72f6b5913132b46f2cc096940c69f1a8b16a93fa46204f70c78bcfd22376b2f8-runc.Zl5CtP.mount: Deactivated successfully. 
Oct 9 07:55:54.464272 containerd[1626]: time="2024-10-09T07:55:54.463429685Z" level=info msg="StartContainer for \"72f6b5913132b46f2cc096940c69f1a8b16a93fa46204f70c78bcfd22376b2f8\" returns successfully" Oct 9 07:55:55.370829 kubelet[2877]: I1009 07:55:55.370757 2877 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:55:55.374957 kubelet[2877]: I1009 07:55:55.374904 2877 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:55:55.478411 kubelet[2877]: I1009 07:55:55.477354 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-6frp9" podStartSLOduration=31.633654505 podStartE2EDuration="38.477256461s" podCreationTimestamp="2024-10-09 07:55:17 +0000 UTC" firstStartedPulling="2024-10-09 07:55:47.468371838 +0000 UTC m=+49.636882819" lastFinishedPulling="2024-10-09 07:55:54.311973788 +0000 UTC m=+56.480484775" observedRunningTime="2024-10-09 07:55:55.476837029 +0000 UTC m=+57.645348019" watchObservedRunningTime="2024-10-09 07:55:55.477256461 +0000 UTC m=+57.645767461" Oct 9 07:55:56.967296 kubelet[2877]: I1009 07:55:56.966937 2877 topology_manager.go:215] "Topology Admit Handler" podUID="486006d2-72db-48ab-a190-2ade85a28c14" podNamespace="calico-apiserver" podName="calico-apiserver-7d4854566c-2gg96" Oct 9 07:55:56.972925 kubelet[2877]: I1009 07:55:56.969039 2877 topology_manager.go:215] "Topology Admit Handler" podUID="a9d5e0ac-3e74-4330-9f86-9063d2460899" podNamespace="calico-apiserver" podName="calico-apiserver-7d4854566c-5b5mm" Oct 9 07:55:57.035551 kubelet[2877]: I1009 07:55:57.035498 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22mdd\" (UniqueName: \"kubernetes.io/projected/a9d5e0ac-3e74-4330-9f86-9063d2460899-kube-api-access-22mdd\") pod \"calico-apiserver-7d4854566c-5b5mm\" (UID: \"a9d5e0ac-3e74-4330-9f86-9063d2460899\") " pod="calico-apiserver/calico-apiserver-7d4854566c-5b5mm" Oct 9 07:55:57.035747 kubelet[2877]: I1009 07:55:57.035576 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsk7z\" (UniqueName: \"kubernetes.io/projected/486006d2-72db-48ab-a190-2ade85a28c14-kube-api-access-jsk7z\") pod \"calico-apiserver-7d4854566c-2gg96\" (UID: \"486006d2-72db-48ab-a190-2ade85a28c14\") " pod="calico-apiserver/calico-apiserver-7d4854566c-2gg96" Oct 9 07:55:57.035747 kubelet[2877]: I1009 07:55:57.035623 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a9d5e0ac-3e74-4330-9f86-9063d2460899-calico-apiserver-certs\") pod \"calico-apiserver-7d4854566c-5b5mm\" (UID: \"a9d5e0ac-3e74-4330-9f86-9063d2460899\") " pod="calico-apiserver/calico-apiserver-7d4854566c-5b5mm" Oct 9 07:55:57.035747 kubelet[2877]: I1009 07:55:57.035697 2877 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/486006d2-72db-48ab-a190-2ade85a28c14-calico-apiserver-certs\") pod \"calico-apiserver-7d4854566c-2gg96\" (UID: \"486006d2-72db-48ab-a190-2ade85a28c14\") " pod="calico-apiserver/calico-apiserver-7d4854566c-2gg96" Oct 9 07:55:57.145241 kubelet[2877]: E1009 07:55:57.144229 2877 secret.go:194] 
Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:55:57.145705 kubelet[2877]: E1009 07:55:57.144237 2877 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:55:57.155517 kubelet[2877]: E1009 07:55:57.155103 2877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9d5e0ac-3e74-4330-9f86-9063d2460899-calico-apiserver-certs podName:a9d5e0ac-3e74-4330-9f86-9063d2460899 nodeName:}" failed. No retries permitted until 2024-10-09 07:55:57.645480391 +0000 UTC m=+59.813991378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a9d5e0ac-3e74-4330-9f86-9063d2460899-calico-apiserver-certs") pod "calico-apiserver-7d4854566c-5b5mm" (UID: "a9d5e0ac-3e74-4330-9f86-9063d2460899") : secret "calico-apiserver-certs" not found Oct 9 07:55:57.155517 kubelet[2877]: E1009 07:55:57.155144 2877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/486006d2-72db-48ab-a190-2ade85a28c14-calico-apiserver-certs podName:486006d2-72db-48ab-a190-2ade85a28c14 nodeName:}" failed. No retries permitted until 2024-10-09 07:55:57.655128474 +0000 UTC m=+59.823639454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/486006d2-72db-48ab-a190-2ade85a28c14-calico-apiserver-certs") pod "calico-apiserver-7d4854566c-2gg96" (UID: "486006d2-72db-48ab-a190-2ade85a28c14") : secret "calico-apiserver-certs" not found Oct 9 07:55:57.888424 containerd[1626]: time="2024-10-09T07:55:57.887691247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4854566c-2gg96,Uid:486006d2-72db-48ab-a190-2ade85a28c14,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:55:57.898341 containerd[1626]: time="2024-10-09T07:55:57.898283762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4854566c-5b5mm,Uid:a9d5e0ac-3e74-4330-9f86-9063d2460899,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:55:58.125442 containerd[1626]: time="2024-10-09T07:55:58.125284617Z" level=info msg="StopPodSandbox for \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\"" Oct 9 07:55:58.378181 systemd-networkd[1270]: calice1e68c1580: Link UP Oct 9 07:55:58.382313 systemd-networkd[1270]: calice1e68c1580: Gained carrier Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.008 [INFO][4704] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0 calico-apiserver-7d4854566c- calico-apiserver 486006d2-72db-48ab-a190-2ade85a28c14 818 0 2024-10-09 07:55:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4854566c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-9xk3k.gb1.brightbox.com calico-apiserver-7d4854566c-2gg96 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calice1e68c1580 [] []}} ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-" Oct 9 07:55:58.415396 
containerd[1626]: 2024-10-09 07:55:58.010 [INFO][4704] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.270 [INFO][4730] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" HandleID="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.295 [INFO][4730] ipam_plugin.go 270: Auto assigning IP ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" HandleID="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000342290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-9xk3k.gb1.brightbox.com", "pod":"calico-apiserver-7d4854566c-2gg96", "timestamp":"2024-10-09 07:55:58.268605128 +0000 UTC"}, Hostname:"srv-9xk3k.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.296 [INFO][4730] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.296 [INFO][4730] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.296 [INFO][4730] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-9xk3k.gb1.brightbox.com' Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.302 [INFO][4730] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.311 [INFO][4730] ipam.go 372: Looking up existing affinities for host host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.323 [INFO][4730] ipam.go 489: Trying affinity for 192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.327 [INFO][4730] ipam.go 155: Attempting to load block cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.333 [INFO][4730] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.333 [INFO][4730] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.335 [INFO][4730] ipam.go 1685: Creating new handle: k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.343 [INFO][4730] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.355 [INFO][4730] ipam.go 1216: Successfully claimed IPs: [192.168.5.133/26] block=192.168.5.128/26 handle="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.355 [INFO][4730] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.133/26] handle="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.357 [INFO][4730] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:55:58.415396 containerd[1626]: 2024-10-09 07:55:58.357 [INFO][4730] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.5.133/26] IPv6=[] ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" HandleID="k8s-pod-network.9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" Oct 9 07:55:58.424379 containerd[1626]: 2024-10-09 07:55:58.360 [INFO][4704] k8s.go 386: Populated endpoint ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0", GenerateName:"calico-apiserver-7d4854566c-", Namespace:"calico-apiserver", SelfLink:"", UID:"486006d2-72db-48ab-a190-2ade85a28c14", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4854566c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7d4854566c-2gg96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice1e68c1580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:58.424379 containerd[1626]: 2024-10-09 07:55:58.361 [INFO][4704] k8s.go 387: Calico CNI using IPs: [192.168.5.133/32] ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" Oct 9 07:55:58.424379 containerd[1626]: 2024-10-09 07:55:58.361 [INFO][4704] dataplane_linux.go 68: Setting the host side veth name to calice1e68c1580 ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" Oct 9 07:55:58.424379 containerd[1626]: 2024-10-09 07:55:58.382 [INFO][4704] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" Oct 9 07:55:58.424379 containerd[1626]: 2024-10-09 07:55:58.384 [INFO][4704] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0", GenerateName:"calico-apiserver-7d4854566c-", Namespace:"calico-apiserver", SelfLink:"", UID:"486006d2-72db-48ab-a190-2ade85a28c14", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4854566c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe", Pod:"calico-apiserver-7d4854566c-2gg96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice1e68c1580", MAC:"3a:17:46:86:e0:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:58.424379 containerd[1626]: 2024-10-09 07:55:58.407 [INFO][4704] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-2gg96" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--2gg96-eth0" Oct 9 07:55:58.479054 systemd-journald[1177]: Under memory pressure, flushing caches. Oct 9 07:55:58.474397 systemd-resolved[1516]: Under memory pressure, flushing caches. Oct 9 07:55:58.474445 systemd-resolved[1516]: Flushed all caches. 
Oct 9 07:55:58.495516 systemd-networkd[1270]: cali05586663241: Link UP Oct 9 07:55:58.498897 systemd-networkd[1270]: cali05586663241: Gained carrier Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.008 [INFO][4711] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0 calico-apiserver-7d4854566c- calico-apiserver a9d5e0ac-3e74-4330-9f86-9063d2460899 820 0 2024-10-09 07:55:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4854566c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-9xk3k.gb1.brightbox.com calico-apiserver-7d4854566c-5b5mm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05586663241 [] []}} ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.009 [INFO][4711] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.272 [INFO][4729] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" HandleID="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.301 [INFO][4729] ipam_plugin.go 270: Auto assigning IP ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" HandleID="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-9xk3k.gb1.brightbox.com", "pod":"calico-apiserver-7d4854566c-5b5mm", "timestamp":"2024-10-09 07:55:58.27215641 +0000 UTC"}, Hostname:"srv-9xk3k.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.302 [INFO][4729] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.356 [INFO][4729] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.356 [INFO][4729] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-9xk3k.gb1.brightbox.com' Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.363 [INFO][4729] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.377 [INFO][4729] ipam.go 372: Looking up existing affinities for host host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.388 [INFO][4729] ipam.go 489: Trying affinity for 192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.397 [INFO][4729] ipam.go 155: Attempting to load block cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.409 [INFO][4729] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.128/26 host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.410 [INFO][4729] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.128/26 handle="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.422 [INFO][4729] ipam.go 1685: Creating new handle: k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542 Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.435 [INFO][4729] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.128/26 handle="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.456 [INFO][4729] ipam.go 1216: Successfully claimed IPs: [192.168.5.134/26] block=192.168.5.128/26 handle="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.457 [INFO][4729] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.134/26] handle="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" host="srv-9xk3k.gb1.brightbox.com" Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.457 [INFO][4729] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:55:58.537855 containerd[1626]: 2024-10-09 07:55:58.457 [INFO][4729] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.5.134/26] IPv6=[] ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" HandleID="k8s-pod-network.74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" Oct 9 07:55:58.542353 containerd[1626]: 2024-10-09 07:55:58.480 [INFO][4711] k8s.go 386: Populated endpoint ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0", GenerateName:"calico-apiserver-7d4854566c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9d5e0ac-3e74-4330-9f86-9063d2460899", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4854566c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7d4854566c-5b5mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05586663241", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:58.542353 containerd[1626]: 2024-10-09 07:55:58.481 [INFO][4711] k8s.go 387: Calico CNI using IPs: [192.168.5.134/32] ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" Oct 9 07:55:58.542353 containerd[1626]: 2024-10-09 07:55:58.481 [INFO][4711] dataplane_linux.go 68: Setting the host side veth name to cali05586663241 ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" Oct 9 07:55:58.542353 containerd[1626]: 2024-10-09 07:55:58.502 [INFO][4711] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" Oct 9 07:55:58.542353 containerd[1626]: 2024-10-09 07:55:58.505 [INFO][4711] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0", GenerateName:"calico-apiserver-7d4854566c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9d5e0ac-3e74-4330-9f86-9063d2460899", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4854566c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542", Pod:"calico-apiserver-7d4854566c-5b5mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05586663241", MAC:"d2:15:27:23:37:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:58.542353 containerd[1626]: 2024-10-09 07:55:58.520 [INFO][4711] k8s.go 500: Wrote updated endpoint to datastore ContainerID="74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542" Namespace="calico-apiserver" Pod="calico-apiserver-7d4854566c-5b5mm" WorkloadEndpoint="srv--9xk3k.gb1.brightbox.com-k8s-calico--apiserver--7d4854566c--5b5mm-eth0" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.462 [WARNING][4755] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c258f453-4ba7-47ef-a510-74cac7855910", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04", Pod:"csi-node-driver-6frp9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib86cb8a3251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.462 [INFO][4755] k8s.go 608: Cleaning up netns ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.462 [INFO][4755] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" iface="eth0" netns="" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.462 [INFO][4755] k8s.go 615: Releasing IP address(es) ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.462 [INFO][4755] utils.go 188: Calico CNI releasing IP address ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.579 [INFO][4777] ipam_plugin.go 417: Releasing address using handleID ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.580 [INFO][4777] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.582 [INFO][4777] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.594 [WARNING][4777] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.597 [INFO][4777] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.599 [INFO][4777] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:58.607444 containerd[1626]: 2024-10-09 07:55:58.603 [INFO][4755] k8s.go 621: Teardown processing complete. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.609288 containerd[1626]: time="2024-10-09T07:55:58.608018924Z" level=info msg="TearDown network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\" successfully" Oct 9 07:55:58.609288 containerd[1626]: time="2024-10-09T07:55:58.608081728Z" level=info msg="StopPodSandbox for \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\" returns successfully" Oct 9 07:55:58.612383 containerd[1626]: time="2024-10-09T07:55:58.611455839Z" level=info msg="RemovePodSandbox for \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\"" Oct 9 07:55:58.612723 containerd[1626]: time="2024-10-09T07:55:58.612411383Z" level=info msg="Forcibly stopping sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\"" Oct 9 07:55:58.613819 containerd[1626]: time="2024-10-09T07:55:58.611826119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:58.613819 containerd[1626]: time="2024-10-09T07:55:58.611938246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:58.613819 containerd[1626]: time="2024-10-09T07:55:58.611973161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:58.613819 containerd[1626]: time="2024-10-09T07:55:58.612003359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:58.640733 containerd[1626]: time="2024-10-09T07:55:58.640033840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:55:58.640733 containerd[1626]: time="2024-10-09T07:55:58.640114564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:58.642621 containerd[1626]: time="2024-10-09T07:55:58.640150142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:55:58.642621 containerd[1626]: time="2024-10-09T07:55:58.641787347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:55:58.819496 containerd[1626]: time="2024-10-09T07:55:58.819449296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4854566c-5b5mm,Uid:a9d5e0ac-3e74-4330-9f86-9063d2460899,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542\"" Oct 9 07:55:58.823442 containerd[1626]: time="2024-10-09T07:55:58.823047130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:55:58.832444 containerd[1626]: time="2024-10-09T07:55:58.832408850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4854566c-2gg96,Uid:486006d2-72db-48ab-a190-2ade85a28c14,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe\"" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.767 [WARNING][4846] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c258f453-4ba7-47ef-a510-74cac7855910", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"719355bf959a2d69d9bb2a62bc20c8f0613b45e16ce750c4067a0488a65f8b04", Pod:"csi-node-driver-6frp9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib86cb8a3251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.768 [INFO][4846] k8s.go 608: Cleaning up netns ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.768 [INFO][4846] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" iface="eth0" netns="" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.768 [INFO][4846] k8s.go 615: Releasing IP address(es) ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.768 [INFO][4846] utils.go 188: Calico CNI releasing IP address ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.846 [INFO][4887] ipam_plugin.go 417: Releasing address using handleID ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.846 [INFO][4887] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.846 [INFO][4887] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.854 [WARNING][4887] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.854 [INFO][4887] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" HandleID="k8s-pod-network.3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Workload="srv--9xk3k.gb1.brightbox.com-k8s-csi--node--driver--6frp9-eth0" Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.856 [INFO][4887] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:58.861270 containerd[1626]: 2024-10-09 07:55:58.858 [INFO][4846] k8s.go 621: Teardown processing complete. ContainerID="3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871" Oct 9 07:55:58.862488 containerd[1626]: time="2024-10-09T07:55:58.861366431Z" level=info msg="TearDown network for sandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\" successfully" Oct 9 07:55:58.887132 containerd[1626]: time="2024-10-09T07:55:58.887061021Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:55:58.887389 containerd[1626]: time="2024-10-09T07:55:58.887173087Z" level=info msg="RemovePodSandbox \"3496d03e740517a3139c6df4764dba446a9f3cdcdaea201f2f0ee51289340871\" returns successfully" Oct 9 07:55:58.888305 containerd[1626]: time="2024-10-09T07:55:58.888052715Z" level=info msg="StopPodSandbox for \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\"" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.938 [WARNING][4918] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5221c43d-cc16-4dce-aec2-ff2ae5da0e97", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9", Pod:"coredns-76f75df574-2t599", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85d158e7bd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.939 [INFO][4918] k8s.go 608: Cleaning up netns ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.939 [INFO][4918] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" iface="eth0" netns="" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.939 [INFO][4918] k8s.go 615: Releasing IP address(es) ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.939 [INFO][4918] utils.go 188: Calico CNI releasing IP address ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.989 [INFO][4924] ipam_plugin.go 417: Releasing address using handleID ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.993 [INFO][4924] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:58.993 [INFO][4924] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:59.004 [WARNING][4924] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:59.004 [INFO][4924] ipam_plugin.go 445: Releasing address using workloadID ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:59.006 [INFO][4924] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:59.025404 containerd[1626]: 2024-10-09 07:55:59.023 [INFO][4918] k8s.go 621: Teardown processing complete. ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.029113 containerd[1626]: time="2024-10-09T07:55:59.025942328Z" level=info msg="TearDown network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\" successfully" Oct 9 07:55:59.029113 containerd[1626]: time="2024-10-09T07:55:59.025977603Z" level=info msg="StopPodSandbox for \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\" returns successfully" Oct 9 07:55:59.029113 containerd[1626]: time="2024-10-09T07:55:59.026459757Z" level=info msg="RemovePodSandbox for \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\"" Oct 9 07:55:59.029113 containerd[1626]: time="2024-10-09T07:55:59.026499117Z" level=info msg="Forcibly stopping sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\"" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.129 [WARNING][4943] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5221c43d-cc16-4dce-aec2-ff2ae5da0e97", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"15713b223810abf7c46bb896011ca4684059e4652bc06f9d433a97e6d6d30ac9", Pod:"coredns-76f75df574-2t599", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85d158e7bd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.130 [INFO][4943] k8s.go 608: Cleaning up netns ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.132 [INFO][4943] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" iface="eth0" netns="" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.132 [INFO][4943] k8s.go 615: Releasing IP address(es) ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.132 [INFO][4943] utils.go 188: Calico CNI releasing IP address ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.185 [INFO][4949] ipam_plugin.go 417: Releasing address using handleID ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.185 [INFO][4949] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.185 [INFO][4949] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.195 [WARNING][4949] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.195 [INFO][4949] ipam_plugin.go 445: Releasing address using workloadID ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" HandleID="k8s-pod-network.904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--2t599-eth0" Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.197 [INFO][4949] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:59.203193 containerd[1626]: 2024-10-09 07:55:59.200 [INFO][4943] k8s.go 621: Teardown processing complete. ContainerID="904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4" Oct 9 07:55:59.204706 containerd[1626]: time="2024-10-09T07:55:59.204644344Z" level=info msg="TearDown network for sandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\" successfully" Oct 9 07:55:59.209661 containerd[1626]: time="2024-10-09T07:55:59.209615324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:55:59.210023 containerd[1626]: time="2024-10-09T07:55:59.209900847Z" level=info msg="RemovePodSandbox \"904485de00b9d4bcf7c22c318f6cde1fa24654c19bb8d74c511c8bac570cd0c4\" returns successfully" Oct 9 07:55:59.210807 containerd[1626]: time="2024-10-09T07:55:59.210740855Z" level=info msg="StopPodSandbox for \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\"" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.272 [WARNING][4968] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0", GenerateName:"calico-kube-controllers-55d54487b6-", Namespace:"calico-system", SelfLink:"", UID:"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d54487b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604", Pod:"calico-kube-controllers-55d54487b6-jf2kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f922f5c149", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.273 [INFO][4968] k8s.go 608: Cleaning up netns ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.273 [INFO][4968] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" iface="eth0" netns="" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.273 [INFO][4968] k8s.go 615: Releasing IP address(es) ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.273 [INFO][4968] utils.go 188: Calico CNI releasing IP address ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.298 [INFO][4974] ipam_plugin.go 417: Releasing address using handleID ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.298 [INFO][4974] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.298 [INFO][4974] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.306 [WARNING][4974] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.306 [INFO][4974] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.307 [INFO][4974] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:59.312465 containerd[1626]: 2024-10-09 07:55:59.309 [INFO][4968] k8s.go 621: Teardown processing complete. ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.312465 containerd[1626]: time="2024-10-09T07:55:59.312393049Z" level=info msg="TearDown network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\" successfully" Oct 9 07:55:59.312465 containerd[1626]: time="2024-10-09T07:55:59.312452349Z" level=info msg="StopPodSandbox for \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\" returns successfully" Oct 9 07:55:59.316168 containerd[1626]: time="2024-10-09T07:55:59.314439017Z" level=info msg="RemovePodSandbox for \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\"" Oct 9 07:55:59.316168 containerd[1626]: time="2024-10-09T07:55:59.314482152Z" level=info msg="Forcibly stopping sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\"" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.364 [WARNING][4992] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0", GenerateName:"calico-kube-controllers-55d54487b6-", Namespace:"calico-system", SelfLink:"", UID:"7bfa79c2-7b56-4a96-b7e3-29115f7ec8ba", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55d54487b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"babfd2ae9ebd20b7919730db586566795a58c8caac47871826238c1e2290d604", Pod:"calico-kube-controllers-55d54487b6-jf2kn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f922f5c149", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.365 [INFO][4992] k8s.go 608: Cleaning up netns ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.365 [INFO][4992] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" iface="eth0" netns="" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.365 [INFO][4992] k8s.go 615: Releasing IP address(es) ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.365 [INFO][4992] utils.go 188: Calico CNI releasing IP address ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.399 [INFO][4998] ipam_plugin.go 417: Releasing address using handleID ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.400 [INFO][4998] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.400 [INFO][4998] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.408 [WARNING][4998] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.408 [INFO][4998] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" HandleID="k8s-pod-network.d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Workload="srv--9xk3k.gb1.brightbox.com-k8s-calico--kube--controllers--55d54487b6--jf2kn-eth0" Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.410 [INFO][4998] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:59.414144 containerd[1626]: 2024-10-09 07:55:59.412 [INFO][4992] k8s.go 621: Teardown processing complete. ContainerID="d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4" Oct 9 07:55:59.415777 containerd[1626]: time="2024-10-09T07:55:59.414265131Z" level=info msg="TearDown network for sandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\" successfully" Oct 9 07:55:59.418106 containerd[1626]: time="2024-10-09T07:55:59.418030243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:55:59.418185 containerd[1626]: time="2024-10-09T07:55:59.418119377Z" level=info msg="RemovePodSandbox \"d88bfa8b48c677030a0add4806c2b07063e7c1e2c183796c2e73071d045918f4\" returns successfully" Oct 9 07:55:59.418777 containerd[1626]: time="2024-10-09T07:55:59.418739058Z" level=info msg="StopPodSandbox for \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\"" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.463 [WARNING][5016] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8dc77e6f-1a05-4ddb-9194-edc11be626aa", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0", Pod:"coredns-76f75df574-mjvxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68d09508bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.463 [INFO][5016] k8s.go 608: Cleaning up netns ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.463 [INFO][5016] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" iface="eth0" netns="" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.463 [INFO][5016] k8s.go 615: Releasing IP address(es) ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.464 [INFO][5016] utils.go 188: Calico CNI releasing IP address ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.501 [INFO][5022] ipam_plugin.go 417: Releasing address using handleID ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.503 [INFO][5022] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.503 [INFO][5022] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.513 [WARNING][5022] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.513 [INFO][5022] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.515 [INFO][5022] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:59.520013 containerd[1626]: 2024-10-09 07:55:59.518 [INFO][5016] k8s.go 621: Teardown processing complete. ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.520013 containerd[1626]: time="2024-10-09T07:55:59.519855633Z" level=info msg="TearDown network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\" successfully" Oct 9 07:55:59.520013 containerd[1626]: time="2024-10-09T07:55:59.519902562Z" level=info msg="StopPodSandbox for \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\" returns successfully" Oct 9 07:55:59.522041 containerd[1626]: time="2024-10-09T07:55:59.520622850Z" level=info msg="RemovePodSandbox for \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\"" Oct 9 07:55:59.522041 containerd[1626]: time="2024-10-09T07:55:59.520661366Z" level=info msg="Forcibly stopping sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\"" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.565 [WARNING][5040] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8dc77e6f-1a05-4ddb-9194-edc11be626aa", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-9xk3k.gb1.brightbox.com", ContainerID:"619697e9603c5870a65b0eab027dfc58e2792e9b97be585e13c6c91cfb37acd0", Pod:"coredns-76f75df574-mjvxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68d09508bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.565 [INFO][5040] k8s.go 608: Cleaning up netns ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.565 [INFO][5040] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" iface="eth0" netns="" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.565 [INFO][5040] k8s.go 615: Releasing IP address(es) ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.565 [INFO][5040] utils.go 188: Calico CNI releasing IP address ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.599 [INFO][5046] ipam_plugin.go 417: Releasing address using handleID ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.599 [INFO][5046] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.600 [INFO][5046] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.607 [WARNING][5046] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.607 [INFO][5046] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" HandleID="k8s-pod-network.e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Workload="srv--9xk3k.gb1.brightbox.com-k8s-coredns--76f75df574--mjvxj-eth0" Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.609 [INFO][5046] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:55:59.614235 containerd[1626]: 2024-10-09 07:55:59.610 [INFO][5040] k8s.go 621: Teardown processing complete. ContainerID="e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e" Oct 9 07:55:59.614235 containerd[1626]: time="2024-10-09T07:55:59.613798126Z" level=info msg="TearDown network for sandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\" successfully" Oct 9 07:55:59.620755 containerd[1626]: time="2024-10-09T07:55:59.620684531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:55:59.620896 containerd[1626]: time="2024-10-09T07:55:59.620792755Z" level=info msg="RemovePodSandbox \"e0df07c0fe3a150074924634352dafcc3b392777cf6c2ff5dd1d8b273063688e\" returns successfully" Oct 9 07:56:00.008663 systemd-networkd[1270]: cali05586663241: Gained IPv6LL Oct 9 07:56:00.329343 systemd-networkd[1270]: calice1e68c1580: Gained IPv6LL Oct 9 07:56:00.522417 systemd-resolved[1516]: Under memory pressure, flushing caches. Oct 9 07:56:00.523259 systemd-journald[1177]: Under memory pressure, flushing caches. Oct 9 07:56:00.522478 systemd-resolved[1516]: Flushed all caches. 
Oct 9 07:56:02.293499 containerd[1626]: time="2024-10-09T07:56:02.293174461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:56:02.294775 containerd[1626]: time="2024-10-09T07:56:02.294703872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 07:56:02.296309 containerd[1626]: time="2024-10-09T07:56:02.296265433Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:56:02.301999 containerd[1626]: time="2024-10-09T07:56:02.301945811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:56:02.304460 containerd[1626]: time="2024-10-09T07:56:02.304253003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.48115708s" Oct 9 07:56:02.304460 containerd[1626]: time="2024-10-09T07:56:02.304337464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:56:02.307658 containerd[1626]: time="2024-10-09T07:56:02.306624327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:56:02.310976 containerd[1626]: time="2024-10-09T07:56:02.310454322Z" level=info msg="CreateContainer within sandbox \"74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:56:02.370548 containerd[1626]: time="2024-10-09T07:56:02.370162556Z" level=info msg="CreateContainer within sandbox \"74b0706a4e5bad3bff99fff1f07579381e8e1ccab0749004bea635e6a6c3e542\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86da5a1376782e049bf870a9f88e472b3a0a205dccf6ca0708364da5c312a052\"" Oct 9 07:56:02.374065 containerd[1626]: time="2024-10-09T07:56:02.372160538Z" level=info msg="StartContainer for \"86da5a1376782e049bf870a9f88e472b3a0a205dccf6ca0708364da5c312a052\"" Oct 9 07:56:02.517328 containerd[1626]: time="2024-10-09T07:56:02.517213603Z" level=info msg="StartContainer for \"86da5a1376782e049bf870a9f88e472b3a0a205dccf6ca0708364da5c312a052\" returns successfully" Oct 9 07:56:02.696551 containerd[1626]: time="2024-10-09T07:56:02.696388015Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:56:02.698199 containerd[1626]: time="2024-10-09T07:56:02.698140240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 9 07:56:02.706879 containerd[1626]: time="2024-10-09T07:56:02.706830811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 400.142461ms" Oct 9 07:56:02.707378 containerd[1626]: time="2024-10-09T07:56:02.707178566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:56:02.711043 containerd[1626]: time="2024-10-09T07:56:02.711001113Z" level=info msg="CreateContainer within sandbox \"9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:56:02.737718 containerd[1626]: time="2024-10-09T07:56:02.737523146Z" level=info msg="CreateContainer within sandbox \"9079af74f224ac7146610df7ef4a7d39525b09fff31902588cd4750863e463fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59710dc64bcce3e86991f4f7405f656e8a5744f9d90c7ca91f9b55edaf70c207\"" Oct 9 07:56:02.739429 containerd[1626]: time="2024-10-09T07:56:02.739358947Z" level=info msg="StartContainer for \"59710dc64bcce3e86991f4f7405f656e8a5744f9d90c7ca91f9b55edaf70c207\"" Oct 9 07:56:02.881161 containerd[1626]: time="2024-10-09T07:56:02.881113797Z" level=info msg="StartContainer for \"59710dc64bcce3e86991f4f7405f656e8a5744f9d90c7ca91f9b55edaf70c207\" returns successfully" Oct 9 07:56:03.647147 kubelet[2877]: I1009 07:56:03.647025 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:56:03.691082 kubelet[2877]: I1009 07:56:03.688916 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4854566c-5b5mm" podStartSLOduration=4.206604198 podStartE2EDuration="7.688793071s" podCreationTimestamp="2024-10-09 07:55:56 +0000 UTC" firstStartedPulling="2024-10-09 07:55:58.82269455 +0000 UTC m=+60.991205523" lastFinishedPulling="2024-10-09 07:56:02.304883423 +0000 UTC m=+64.473394396" observedRunningTime="2024-10-09 07:56:02.664791817 +0000 UTC m=+64.833302840" watchObservedRunningTime="2024-10-09 07:56:03.688793071 +0000 UTC m=+65.857304052" Oct 9 07:56:04.649733 kubelet[2877]: I1009 07:56:04.649693 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:56:10.508692 update_engine[1603]: I1009 07:56:10.508528 1603 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 9 07:56:10.508692 update_engine[1603]: I1009 07:56:10.508641 1603 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 9 07:56:10.511471 update_engine[1603]: I1009 07:56:10.511272 1603 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.513514 1603 omaha_request_params.cc:62] Current group set to stable Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.513839 1603 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.513850 1603 update_attempter.cc:643] Scheduling an action processor start. 
Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.513873 1603 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.513949 1603 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.514051 1603 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.514065 1603 omaha_request_action.cc:272] Request: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: Oct 9 07:56:10.514721 update_engine[1603]: I1009 07:56:10.514075 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 07:56:10.541556 locksmithd[1642]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 9 07:56:10.545197 update_engine[1603]: I1009 07:56:10.545042 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 07:56:10.545836 update_engine[1603]: I1009 07:56:10.545708 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 07:56:10.551478 update_engine[1603]: E1009 07:56:10.551110 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 07:56:10.551478 update_engine[1603]: I1009 07:56:10.551224 1603 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 9 07:56:10.690609 systemd[1]: Started sshd@7-10.230.72.98:22-147.75.109.163:50964.service - OpenSSH per-connection server daemon (147.75.109.163:50964). Oct 9 07:56:11.722228 sshd[5170]: Accepted publickey for core from 147.75.109.163 port 50964 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:11.724439 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:11.748769 systemd-logind[1599]: New session 10 of user core. Oct 9 07:56:11.758168 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:56:12.877719 sshd[5170]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:12.892729 systemd[1]: sshd@7-10.230.72.98:22-147.75.109.163:50964.service: Deactivated successfully. Oct 9 07:56:12.897800 systemd-logind[1599]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:56:12.898838 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:56:12.900367 systemd-logind[1599]: Removed session 10. Oct 9 07:56:18.037565 systemd[1]: Started sshd@8-10.230.72.98:22-147.75.109.163:47220.service - OpenSSH per-connection server daemon (147.75.109.163:47220). Oct 9 07:56:18.999554 sshd[5215]: Accepted publickey for core from 147.75.109.163 port 47220 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:19.001394 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:19.010703 systemd-logind[1599]: New session 11 of user core. Oct 9 07:56:19.017654 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:56:19.866083 sshd[5215]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:19.875887 systemd[1]: sshd@8-10.230.72.98:22-147.75.109.163:47220.service: Deactivated successfully. 
Oct 9 07:56:19.889899 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:56:19.892321 systemd-logind[1599]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:56:19.895198 systemd-logind[1599]: Removed session 11. Oct 9 07:56:20.380305 update_engine[1603]: I1009 07:56:20.379855 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 07:56:20.382581 update_engine[1603]: I1009 07:56:20.382285 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 07:56:20.383789 update_engine[1603]: I1009 07:56:20.383352 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 07:56:20.383789 update_engine[1603]: E1009 07:56:20.383674 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 07:56:20.383789 update_engine[1603]: I1009 07:56:20.383764 1603 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 9 07:56:25.024662 systemd[1]: Started sshd@9-10.230.72.98:22-147.75.109.163:47224.service - OpenSSH per-connection server daemon (147.75.109.163:47224). Oct 9 07:56:25.987618 sshd[5239]: Accepted publickey for core from 147.75.109.163 port 47224 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:25.990071 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:25.998400 systemd-logind[1599]: New session 12 of user core. Oct 9 07:56:26.007432 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:56:26.746849 sshd[5239]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:26.754834 systemd[1]: sshd@9-10.230.72.98:22-147.75.109.163:47224.service: Deactivated successfully. Oct 9 07:56:26.755195 systemd-logind[1599]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:56:26.762785 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:56:26.764566 systemd-logind[1599]: Removed session 12. Oct 9 07:56:26.902527 systemd[1]: Started sshd@10-10.230.72.98:22-147.75.109.163:47228.service - OpenSSH per-connection server daemon (147.75.109.163:47228). Oct 9 07:56:27.854090 sshd[5258]: Accepted publickey for core from 147.75.109.163 port 47228 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:27.863276 sshd[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:27.873565 systemd-logind[1599]: New session 13 of user core. Oct 9 07:56:27.883899 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:56:28.763746 sshd[5258]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:28.771147 systemd[1]: sshd@10-10.230.72.98:22-147.75.109.163:47228.service: Deactivated successfully. Oct 9 07:56:28.776173 systemd-logind[1599]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:56:28.777180 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:56:28.779818 systemd-logind[1599]: Removed session 13. Oct 9 07:56:29.340773 systemd[1]: Started sshd@11-10.230.72.98:22-147.75.109.163:53098.service - OpenSSH per-connection server daemon (147.75.109.163:53098). Oct 9 07:56:30.381441 update_engine[1603]: I1009 07:56:30.381317 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 07:56:30.382311 update_engine[1603]: I1009 07:56:30.382071 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 07:56:30.382551 update_engine[1603]: I1009 07:56:30.382519 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 07:56:30.382983 update_engine[1603]: E1009 07:56:30.382959 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 07:56:30.383076 update_engine[1603]: I1009 07:56:30.383026 1603 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 9 07:56:30.976027 systemd[1]: run-containerd-runc-k8s.io-cd12cc02d463edaa00cad9d1e4c4107270e29a875852ed4af3e8a338f65d6f87-runc.BXURUs.mount: Deactivated successfully. Oct 9 07:56:32.845382 sshd[5270]: Accepted publickey for core from 147.75.109.163 port 53098 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:32.848471 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:32.856080 systemd-logind[1599]: New session 14 of user core. Oct 9 07:56:32.864116 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:56:33.584933 sshd[5270]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:33.592312 systemd-logind[1599]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:56:33.592713 systemd[1]: sshd@11-10.230.72.98:22-147.75.109.163:53098.service: Deactivated successfully. Oct 9 07:56:33.598636 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:56:33.601170 systemd-logind[1599]: Removed session 14. Oct 9 07:56:35.124722 kubelet[2877]: I1009 07:56:35.122927 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:56:35.166323 kubelet[2877]: I1009 07:56:35.165970 2877 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d4854566c-2gg96" podStartSLOduration=35.293231756 podStartE2EDuration="39.16588537s" podCreationTimestamp="2024-10-09 07:55:56 +0000 UTC" firstStartedPulling="2024-10-09 07:55:58.835285074 +0000 UTC m=+61.003796050" lastFinishedPulling="2024-10-09 07:56:02.707938682 +0000 UTC m=+64.876449664" observedRunningTime="2024-10-09 07:56:03.691160097 +0000 UTC m=+65.859671085" watchObservedRunningTime="2024-10-09 07:56:35.16588537 +0000 UTC m=+97.334396368" Oct 9 07:56:38.793571 systemd[1]: Started sshd@12-10.230.72.98:22-147.75.109.163:57816.service - OpenSSH per-connection server daemon (147.75.109.163:57816). Oct 9 07:56:39.925577 sshd[5318]: Accepted publickey for core from 147.75.109.163 port 57816 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:39.928895 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:39.936467 systemd-logind[1599]: New session 15 of user core. Oct 9 07:56:39.943597 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:56:40.380533 update_engine[1603]: I1009 07:56:40.380270 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 07:56:40.383220 update_engine[1603]: I1009 07:56:40.381685 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 07:56:40.383220 update_engine[1603]: I1009 07:56:40.383018 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 07:56:40.384698 update_engine[1603]: E1009 07:56:40.383805 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.383870 1603 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.383886 1603 omaha_request_action.cc:617] Omaha request response: Oct 9 07:56:40.384698 update_engine[1603]: E1009 07:56:40.384035 1603 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.384193 1603 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.384229 1603 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.384240 1603 update_attempter.cc:306] Processing Done. Oct 9 07:56:40.384698 update_engine[1603]: E1009 07:56:40.384281 1603 update_attempter.cc:619] Update failed. Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.384315 1603 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.384325 1603 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 9 07:56:40.384698 update_engine[1603]: I1009 07:56:40.384331 1603 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 9 07:56:40.386041 update_engine[1603]: I1009 07:56:40.385251 1603 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 07:56:40.386041 update_engine[1603]: I1009 07:56:40.385329 1603 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 07:56:40.386041 update_engine[1603]: I1009 07:56:40.385339 1603 omaha_request_action.cc:272] Request: Oct 9 07:56:40.386041 update_engine[1603]: Oct 9 07:56:40.386041 update_engine[1603]: Oct 9 07:56:40.386041 update_engine[1603]: Oct 9 07:56:40.386041 update_engine[1603]: Oct 9 07:56:40.386041 update_engine[1603]: Oct 9 07:56:40.386041 update_engine[1603]: Oct 9 07:56:40.386041 update_engine[1603]: I1009 07:56:40.385345 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 07:56:40.386041 update_engine[1603]: I1009 07:56:40.385550 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 07:56:40.386041 update_engine[1603]: I1009 07:56:40.385747 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 07:56:40.386853 locksmithd[1642]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 9 07:56:40.387480 update_engine[1603]: E1009 07:56:40.386093 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 07:56:40.387480 update_engine[1603]: I1009 07:56:40.386137 1603 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 07:56:40.387480 update_engine[1603]: I1009 07:56:40.386145 1603 omaha_request_action.cc:617] Omaha request response: Oct 9 07:56:40.387480 update_engine[1603]: I1009 07:56:40.386152 1603 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 07:56:40.387480 update_engine[1603]: I1009 07:56:40.386157 1603 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 07:56:40.387480 update_engine[1603]: I1009 07:56:40.386161 1603 update_attempter.cc:306] Processing Done. Oct 9 07:56:40.387480 update_engine[1603]: I1009 07:56:40.386168 1603 update_attempter.cc:310] Error event sent. Oct 9 07:56:40.387480 update_engine[1603]: I1009 07:56:40.386189 1603 update_check_scheduler.cc:74] Next update check in 40m30s Oct 9 07:56:40.388134 locksmithd[1642]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 9 07:56:40.902457 sshd[5318]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:40.912278 systemd-logind[1599]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:56:40.913386 systemd[1]: sshd@12-10.230.72.98:22-147.75.109.163:57816.service: Deactivated successfully. Oct 9 07:56:40.918965 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:56:40.921303 systemd-logind[1599]: Removed session 15. Oct 9 07:56:41.265842 kubelet[2877]: I1009 07:56:41.265520 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:56:46.018982 systemd[1]: Started sshd@13-10.230.72.98:22-147.75.109.163:57818.service - OpenSSH per-connection server daemon (147.75.109.163:57818). Oct 9 07:56:46.989046 sshd[5358]: Accepted publickey for core from 147.75.109.163 port 57818 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:46.991473 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:46.999980 systemd-logind[1599]: New session 16 of user core. Oct 9 07:56:47.006055 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:56:47.764547 sshd[5358]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:47.769996 systemd[1]: sshd@13-10.230.72.98:22-147.75.109.163:57818.service: Deactivated successfully. Oct 9 07:56:47.773083 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:56:47.773555 systemd-logind[1599]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:56:47.776794 systemd-logind[1599]: Removed session 16. Oct 9 07:56:52.932084 systemd[1]: Started sshd@14-10.230.72.98:22-147.75.109.163:40542.service - OpenSSH per-connection server daemon (147.75.109.163:40542). Oct 9 07:56:53.918099 sshd[5399]: Accepted publickey for core from 147.75.109.163 port 40542 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:53.920440 sshd[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:53.928497 systemd-logind[1599]: New session 17 of user core. 
Oct 9 07:56:53.938894 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:56:54.720410 sshd[5399]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:54.725649 systemd[1]: sshd@14-10.230.72.98:22-147.75.109.163:40542.service: Deactivated successfully. Oct 9 07:56:54.732597 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:56:54.734312 systemd-logind[1599]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:56:54.735950 systemd-logind[1599]: Removed session 17. Oct 9 07:56:54.886320 systemd[1]: Started sshd@15-10.230.72.98:22-147.75.109.163:40546.service - OpenSSH per-connection server daemon (147.75.109.163:40546). Oct 9 07:56:55.817699 sshd[5413]: Accepted publickey for core from 147.75.109.163 port 40546 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:55.819326 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:55.826198 systemd-logind[1599]: New session 18 of user core. Oct 9 07:56:55.835768 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:56:56.967945 sshd[5413]: pam_unix(sshd:session): session closed for user core Oct 9 07:56:56.981293 systemd[1]: sshd@15-10.230.72.98:22-147.75.109.163:40546.service: Deactivated successfully. Oct 9 07:56:56.991836 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:56:56.996468 systemd-logind[1599]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:56:57.001114 systemd-logind[1599]: Removed session 18. Oct 9 07:56:57.123144 systemd[1]: Started sshd@16-10.230.72.98:22-147.75.109.163:40554.service - OpenSSH per-connection server daemon (147.75.109.163:40554). Oct 9 07:56:58.095298 sshd[5425]: Accepted publickey for core from 147.75.109.163 port 40554 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:56:58.098011 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:56:58.106147 systemd-logind[1599]: New session 19 of user core. Oct 9 07:56:58.112745 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:57:01.482836 sshd[5425]: pam_unix(sshd:session): session closed for user core Oct 9 07:57:01.497327 systemd[1]: sshd@16-10.230.72.98:22-147.75.109.163:40554.service: Deactivated successfully. Oct 9 07:57:01.505361 systemd-logind[1599]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:57:01.509252 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:57:01.511783 systemd-logind[1599]: Removed session 19. Oct 9 07:57:01.653798 systemd[1]: Started sshd@17-10.230.72.98:22-147.75.109.163:59130.service - OpenSSH per-connection server daemon (147.75.109.163:59130). Oct 9 07:57:02.536737 systemd-resolved[1516]: Under memory pressure, flushing caches. Oct 9 07:57:02.545754 systemd-journald[1177]: Under memory pressure, flushing caches. Oct 9 07:57:02.536756 systemd-resolved[1516]: Flushed all caches. Oct 9 07:57:02.595110 sshd[5470]: Accepted publickey for core from 147.75.109.163 port 59130 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:57:02.597765 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:57:02.606269 systemd-logind[1599]: New session 20 of user core. Oct 9 07:57:02.612806 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 9 07:57:03.735600 sshd[5470]: pam_unix(sshd:session): session closed for user core Oct 9 07:57:03.742618 systemd[1]: sshd@17-10.230.72.98:22-147.75.109.163:59130.service: Deactivated successfully. Oct 9 07:57:03.746842 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:57:03.748368 systemd-logind[1599]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:57:03.750131 systemd-logind[1599]: Removed session 20. Oct 9 07:57:03.901735 systemd[1]: Started sshd@18-10.230.72.98:22-147.75.109.163:59140.service - OpenSSH per-connection server daemon (147.75.109.163:59140). Oct 9 07:57:04.830222 sshd[5490]: Accepted publickey for core from 147.75.109.163 port 59140 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:57:04.832628 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:57:04.839682 systemd-logind[1599]: New session 21 of user core. Oct 9 07:57:04.850039 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:57:05.574601 sshd[5490]: pam_unix(sshd:session): session closed for user core Oct 9 07:57:05.581289 systemd[1]: sshd@18-10.230.72.98:22-147.75.109.163:59140.service: Deactivated successfully. Oct 9 07:57:05.582254 systemd-logind[1599]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:57:05.586981 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:57:05.590558 systemd-logind[1599]: Removed session 21. Oct 9 07:57:10.781521 systemd[1]: Started sshd@19-10.230.72.98:22-147.75.109.163:46850.service - OpenSSH per-connection server daemon (147.75.109.163:46850). Oct 9 07:57:11.854272 sshd[5513]: Accepted publickey for core from 147.75.109.163 port 46850 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:57:11.856429 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:57:11.863470 systemd-logind[1599]: New session 22 of user core. Oct 9 07:57:11.870754 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:57:12.738935 sshd[5513]: pam_unix(sshd:session): session closed for user core Oct 9 07:57:12.744943 systemd[1]: sshd@19-10.230.72.98:22-147.75.109.163:46850.service: Deactivated successfully. Oct 9 07:57:12.748597 systemd-logind[1599]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:57:12.749411 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:57:12.751687 systemd-logind[1599]: Removed session 22. Oct 9 07:57:17.890633 systemd[1]: Started sshd@20-10.230.72.98:22-147.75.109.163:49910.service - OpenSSH per-connection server daemon (147.75.109.163:49910). Oct 9 07:57:18.845882 sshd[5560]: Accepted publickey for core from 147.75.109.163 port 49910 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:57:18.848603 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:57:18.855813 systemd-logind[1599]: New session 23 of user core. Oct 9 07:57:18.863827 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 07:57:19.682857 sshd[5560]: pam_unix(sshd:session): session closed for user core Oct 9 07:57:19.686769 systemd-logind[1599]: Session 23 logged out. Waiting for processes to exit. Oct 9 07:57:19.687085 systemd[1]: sshd@20-10.230.72.98:22-147.75.109.163:49910.service: Deactivated successfully. Oct 9 07:57:19.691895 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:57:19.695387 systemd-logind[1599]: Removed session 23. 
Oct 9 07:57:24.808534 systemd[1]: Started sshd@21-10.230.72.98:22-147.75.109.163:49920.service - OpenSSH per-connection server daemon (147.75.109.163:49920). Oct 9 07:57:25.757021 sshd[5580]: Accepted publickey for core from 147.75.109.163 port 49920 ssh2: RSA SHA256:z8SERNOz70RXmsszd0t6WAiJugIQVIPROP/9OQBL8sM Oct 9 07:57:25.759664 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:57:25.767039 systemd-logind[1599]: New session 24 of user core. Oct 9 07:57:25.773413 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 07:57:26.503544 sshd[5580]: pam_unix(sshd:session): session closed for user core Oct 9 07:57:26.508433 systemd[1]: sshd@21-10.230.72.98:22-147.75.109.163:49920.service: Deactivated successfully. Oct 9 07:57:26.514707 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 07:57:26.517514 systemd-logind[1599]: Session 24 logged out. Waiting for processes to exit. Oct 9 07:57:26.519026 systemd-logind[1599]: Removed session 24.