Aug 6 00:16:14.049040 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 6 00:16:14.049149 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 00:16:14.049167 kernel: BIOS-provided physical RAM map: Aug 6 00:16:14.049289 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 6 00:16:14.049308 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 6 00:16:14.049318 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 6 00:16:14.049330 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Aug 6 00:16:14.049341 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Aug 6 00:16:14.049351 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 6 00:16:14.049362 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 6 00:16:14.049400 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 6 00:16:14.049417 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 6 00:16:14.049435 kernel: NX (Execute Disable) protection: active Aug 6 00:16:14.049446 kernel: APIC: Static calls initialized Aug 6 00:16:14.049458 kernel: SMBIOS 2.8 present. Aug 6 00:16:14.049470 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Aug 6 00:16:14.049482 kernel: Hypervisor detected: KVM Aug 6 00:16:14.049498 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 6 00:16:14.049509 kernel: kvm-clock: using sched offset of 4639534589 cycles Aug 6 00:16:14.049521 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 6 00:16:14.049533 kernel: tsc: Detected 2499.998 MHz processor Aug 6 00:16:14.049545 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 6 00:16:14.049557 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 6 00:16:14.049568 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Aug 6 00:16:14.049580 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 6 00:16:14.049591 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 6 00:16:14.049607 kernel: Using GB pages for direct mapping Aug 6 00:16:14.049619 kernel: ACPI: Early table checksum verification disabled Aug 6 00:16:14.049630 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Aug 6 00:16:14.049642 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 00:16:14.049654 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 00:16:14.049665 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 00:16:14.049677 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Aug 6 00:16:14.049688 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 00:16:14.049700 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 
00:16:14.049715 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 00:16:14.049727 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 6 00:16:14.049738 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Aug 6 00:16:14.049771 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Aug 6 00:16:14.049784 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Aug 6 00:16:14.049803 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Aug 6 00:16:14.049815 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Aug 6 00:16:14.049832 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Aug 6 00:16:14.049844 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Aug 6 00:16:14.049856 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 6 00:16:14.049868 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 6 00:16:14.049880 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Aug 6 00:16:14.049892 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Aug 6 00:16:14.049904 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Aug 6 00:16:14.049915 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Aug 6 00:16:14.049932 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Aug 6 00:16:14.049943 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Aug 6 00:16:14.049955 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Aug 6 00:16:14.049967 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Aug 6 00:16:14.049979 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Aug 6 00:16:14.049990 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Aug 6 00:16:14.050002 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Aug 6 00:16:14.050014 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Aug 6 00:16:14.050026 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Aug 6 00:16:14.050043 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Aug 6 00:16:14.050055 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 6 00:16:14.050067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 6 00:16:14.050079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Aug 6 00:16:14.050091 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Aug 6 00:16:14.050103 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Aug 6 00:16:14.050115 kernel: Zone ranges: Aug 6 00:16:14.050127 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 6 00:16:14.050140 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Aug 6 00:16:14.050156 kernel: Normal empty Aug 6 00:16:14.050168 kernel: Movable zone start for each node Aug 6 00:16:14.050263 kernel: Early memory node ranges Aug 6 00:16:14.050275 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 6 00:16:14.050287 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Aug 6 00:16:14.050299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Aug 6 00:16:14.050311 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 6 00:16:14.050323 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 6 00:16:14.050335 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Aug 6 00:16:14.050347 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 6 00:16:14.050365 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 6 00:16:14.050377 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 6 00:16:14.050389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 dfl dfl) Aug 6 00:16:14.050401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 6 00:16:14.050413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 6 00:16:14.050426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 6 00:16:14.050438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 6 00:16:14.050450 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 6 00:16:14.050461 kernel: TSC deadline timer available Aug 6 00:16:14.050478 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Aug 6 00:16:14.050490 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 6 00:16:14.050503 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 6 00:16:14.050515 kernel: Booting paravirtualized kernel on KVM Aug 6 00:16:14.050527 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 6 00:16:14.050539 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Aug 6 00:16:14.050551 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u262144 Aug 6 00:16:14.050563 kernel: pcpu-alloc: s196904 r8192 d32472 u262144 alloc=1*2097152 Aug 6 00:16:14.050575 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Aug 6 00:16:14.050592 kernel: kvm-guest: PV spinlocks enabled Aug 6 00:16:14.050604 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 6 00:16:14.050618 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 00:16:14.050631 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 6 00:16:14.050643 kernel: random: crng init done Aug 6 00:16:14.050655 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 6 00:16:14.050703 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 6 00:16:14.050717 kernel: Fallback order for Node 0: 0 Aug 6 00:16:14.050736 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Aug 6 00:16:14.052778 kernel: Policy zone: DMA32 Aug 6 00:16:14.052796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 6 00:16:14.052809 kernel: software IO TLB: area num 16. Aug 6 00:16:14.052822 kernel: Memory: 1895384K/2096616K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 200972K reserved, 0K cma-reserved) Aug 6 00:16:14.052834 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Aug 6 00:16:14.052847 kernel: Kernel/User page tables isolation: enabled Aug 6 00:16:14.052859 kernel: ftrace: allocating 37659 entries in 148 pages Aug 6 00:16:14.052871 kernel: ftrace: allocated 148 pages with 3 groups Aug 6 00:16:14.052892 kernel: Dynamic Preempt: voluntary Aug 6 00:16:14.052905 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 6 00:16:14.052918 kernel: rcu: RCU event tracing is enabled. Aug 6 00:16:14.052930 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Aug 6 00:16:14.052943 kernel: Trampoline variant of Tasks RCU enabled. 
Aug 6 00:16:14.052968 kernel: Rude variant of Tasks RCU enabled. Aug 6 00:16:14.052986 kernel: Tracing variant of Tasks RCU enabled. Aug 6 00:16:14.052999 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 6 00:16:14.053011 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Aug 6 00:16:14.053024 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Aug 6 00:16:14.053037 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 6 00:16:14.053049 kernel: Console: colour VGA+ 80x25 Aug 6 00:16:14.053066 kernel: printk: console [tty0] enabled Aug 6 00:16:14.053079 kernel: printk: console [ttyS0] enabled Aug 6 00:16:14.053092 kernel: ACPI: Core revision 20230628 Aug 6 00:16:14.053104 kernel: APIC: Switch to symmetric I/O mode setup Aug 6 00:16:14.053117 kernel: x2apic enabled Aug 6 00:16:14.053135 kernel: APIC: Switched APIC routing to: physical x2apic Aug 6 00:16:14.053148 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Aug 6 00:16:14.053161 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Aug 6 00:16:14.053198 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 6 00:16:14.053213 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 6 00:16:14.053226 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 6 00:16:14.053239 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 6 00:16:14.053251 kernel: Spectre V2 : Mitigation: Retpolines Aug 6 00:16:14.053264 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 6 00:16:14.053283 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 6 00:16:14.053296 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 6 00:16:14.053308 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 6 00:16:14.053321 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 6 00:16:14.053334 kernel: MDS: Mitigation: Clear CPU buffers Aug 6 00:16:14.053346 kernel: MMIO Stale Data: Unknown: No mitigations Aug 6 00:16:14.053359 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 6 00:16:14.053371 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 6 00:16:14.053384 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 6 00:16:14.053397 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 6 00:16:14.053409 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 6 00:16:14.053427 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 6 00:16:14.053440 kernel: Freeing SMP alternatives memory: 32K Aug 6 00:16:14.053452 kernel: pid_max: default: 32768 minimum: 301 Aug 6 00:16:14.053465 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 6 00:16:14.053477 kernel: SELinux: Initializing. Aug 6 00:16:14.053490 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 6 00:16:14.053503 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 6 00:16:14.053516 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Aug 6 00:16:14.053528 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. 
Aug 6 00:16:14.053541 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Aug 6 00:16:14.053554 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1. Aug 6 00:16:14.053572 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Aug 6 00:16:14.053586 kernel: signal: max sigframe size: 1776 Aug 6 00:16:14.053598 kernel: rcu: Hierarchical SRCU implementation. Aug 6 00:16:14.053612 kernel: rcu: Max phase no-delay instances is 400. Aug 6 00:16:14.053624 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 6 00:16:14.053637 kernel: smp: Bringing up secondary CPUs ... Aug 6 00:16:14.053650 kernel: smpboot: x86: Booting SMP configuration: Aug 6 00:16:14.053663 kernel: .... node #0, CPUs: #1 Aug 6 00:16:14.053676 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 6 00:16:14.053693 kernel: smp: Brought up 1 node, 2 CPUs Aug 6 00:16:14.053706 kernel: smpboot: Max logical packages: 16 Aug 6 00:16:14.053719 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Aug 6 00:16:14.053732 kernel: devtmpfs: initialized Aug 6 00:16:14.053759 kernel: x86/mm: Memory block size: 128MB Aug 6 00:16:14.053774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 6 00:16:14.053788 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Aug 6 00:16:14.053800 kernel: pinctrl core: initialized pinctrl subsystem Aug 6 00:16:14.053814 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 6 00:16:14.053833 kernel: audit: initializing netlink subsys (disabled) Aug 6 00:16:14.053846 kernel: audit: type=2000 audit(1722903372.423:1): state=initialized audit_enabled=0 res=1 Aug 6 00:16:14.053859 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 6 00:16:14.053872 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 6 00:16:14.053884 kernel: cpuidle: using governor menu Aug 6 00:16:14.053897 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 6 00:16:14.053910 kernel: dca service started, version 1.12.1 Aug 6 00:16:14.053923 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 6 00:16:14.053936 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 6 00:16:14.053954 kernel: PCI: Using configuration type 1 for base access Aug 6 00:16:14.053967 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 6 00:16:14.053980 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 6 00:16:14.053993 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 6 00:16:14.054006 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 6 00:16:14.054019 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 6 00:16:14.054032 kernel: ACPI: Added _OSI(Module Device) Aug 6 00:16:14.054045 kernel: ACPI: Added _OSI(Processor Device) Aug 6 00:16:14.054057 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 6 00:16:14.054075 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 6 00:16:14.054088 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 6 00:16:14.054101 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 6 00:16:14.054114 kernel: ACPI: Interpreter enabled Aug 6 00:16:14.054126 kernel: ACPI: PM: (supports S0 S5) Aug 6 00:16:14.054139 kernel: ACPI: Using IOAPIC for interrupt routing Aug 6 00:16:14.054153 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 6 00:16:14.054166 kernel: PCI: Using E820 reservations for host bridge windows Aug 6 00:16:14.054190 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 6 00:16:14.054209 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 6 00:16:14.054531 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 6 00:16:14.054716 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 6 00:16:14.055234 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 6 00:16:14.055257 kernel: PCI host bridge to bus 0000:00 Aug 6 00:16:14.055459 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 6 00:16:14.055618 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 6 00:16:14.059087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 6 00:16:14.059304 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 6 00:16:14.059457 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 6 00:16:14.059639 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Aug 6 00:16:14.059858 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 6 00:16:14.060052 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 6 00:16:14.060279 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Aug 6 00:16:14.060453 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Aug 6 00:16:14.060623 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Aug 6 00:16:14.062087 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Aug 6 00:16:14.062295 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 6 00:16:14.062482 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.062652 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Aug 6 00:16:14.064942 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.065128 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Aug 6 00:16:14.065382 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.065555 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Aug 6 00:16:14.065735 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.066043 kernel: pci 0000:00:02.3: 
reg 0x10: [mem 0xfea54000-0xfea54fff] Aug 6 00:16:14.066256 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.066424 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Aug 6 00:16:14.066602 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.068586 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Aug 6 00:16:14.068819 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.068999 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Aug 6 00:16:14.069210 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Aug 6 00:16:14.069465 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Aug 6 00:16:14.069654 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 6 00:16:14.071120 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Aug 6 00:16:14.071584 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Aug 6 00:16:14.072851 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Aug 6 00:16:14.073034 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Aug 6 00:16:14.073296 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Aug 6 00:16:14.073468 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Aug 6 00:16:14.073636 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Aug 6 00:16:14.075948 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Aug 6 00:16:14.076146 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 6 00:16:14.076338 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 6 00:16:14.076537 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 6 00:16:14.076720 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Aug 6 00:16:14.076926 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Aug 6 00:16:14.077111 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 6 00:16:14.077349 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 6 00:16:14.077541 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Aug 6 00:16:14.077718 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Aug 6 00:16:14.081107 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 6 00:16:14.081308 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 6 00:16:14.081480 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 6 00:16:14.081671 kernel: pci_bus 0000:02: extended config space not accessible Aug 6 00:16:14.081913 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Aug 6 00:16:14.082093 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Aug 6 00:16:14.082311 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 6 00:16:14.082490 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 6 00:16:14.082679 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Aug 6 00:16:14.082971 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Aug 6 00:16:14.083141 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 6 00:16:14.083358 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Aug 6 00:16:14.083525 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 6 00:16:14.083719 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Aug 6 00:16:14.086489 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] 
Aug 6 00:16:14.086694 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 6 00:16:14.086899 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 6 00:16:14.087071 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 6 00:16:14.087295 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 6 00:16:14.087470 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 6 00:16:14.087638 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 6 00:16:14.088888 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 6 00:16:14.089179 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 6 00:16:14.089359 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 6 00:16:14.089536 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 6 00:16:14.089703 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 6 00:16:14.089973 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 6 00:16:14.090147 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 6 00:16:14.090336 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 6 00:16:14.090515 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 6 00:16:14.090691 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 6 00:16:14.090880 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 6 00:16:14.091054 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 6 00:16:14.091074 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 6 00:16:14.091088 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 6 00:16:14.091102 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 6 00:16:14.091115 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 6 00:16:14.091135 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 6 00:16:14.091148 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 6 00:16:14.091161 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 6 00:16:14.091186 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 6 00:16:14.091200 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 6 00:16:14.091212 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 6 00:16:14.091225 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 6 00:16:14.091239 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 6 00:16:14.091251 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 6 00:16:14.091270 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 6 00:16:14.091283 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 6 00:16:14.091296 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 6 00:16:14.091309 kernel: iommu: Default domain type: Translated Aug 6 00:16:14.091322 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 6 00:16:14.091336 kernel: PCI: Using ACPI for IRQ routing Aug 6 00:16:14.091349 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 6 00:16:14.091362 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 6 00:16:14.091375 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Aug 6 00:16:14.091550 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 6 00:16:14.091718 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 6 00:16:14.091900 kernel: pci 
0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 6 00:16:14.091920 kernel: vgaarb: loaded Aug 6 00:16:14.091934 kernel: clocksource: Switched to clocksource kvm-clock Aug 6 00:16:14.091947 kernel: VFS: Disk quotas dquot_6.6.0 Aug 6 00:16:14.091961 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 6 00:16:14.091973 kernel: pnp: PnP ACPI init Aug 6 00:16:14.092164 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 6 00:16:14.092204 kernel: pnp: PnP ACPI: found 5 devices Aug 6 00:16:14.092218 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 6 00:16:14.092231 kernel: NET: Registered PF_INET protocol family Aug 6 00:16:14.092245 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 6 00:16:14.092258 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 6 00:16:14.092271 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 6 00:16:14.092284 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 6 00:16:14.092296 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 6 00:16:14.092314 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 6 00:16:14.092328 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 6 00:16:14.092341 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 6 00:16:14.092354 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 6 00:16:14.092367 kernel: NET: Registered PF_XDP protocol family Aug 6 00:16:14.092536 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Aug 6 00:16:14.092708 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Aug 6 00:16:14.092914 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Aug 6 00:16:14.093104 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Aug 6 00:16:14.093290 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 6 00:16:14.093463 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 6 00:16:14.093637 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 6 00:16:14.093880 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 6 00:16:14.094045 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Aug 6 00:16:14.094232 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Aug 6 00:16:14.094396 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Aug 6 00:16:14.094561 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Aug 6 00:16:14.094728 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Aug 6 00:16:14.094925 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Aug 6 00:16:14.095093 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Aug 6 00:16:14.095288 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Aug 6 00:16:14.095471 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 6 00:16:14.095681 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 6 00:16:14.095920 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 6 00:16:14.096084 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Aug 6 00:16:14.096262 
kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 6 00:16:14.096426 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 6 00:16:14.096590 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 6 00:16:14.096774 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Aug 6 00:16:14.096943 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Aug 6 00:16:14.097118 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 6 00:16:14.097294 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 6 00:16:14.097461 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Aug 6 00:16:14.097625 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 6 00:16:14.097834 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 6 00:16:14.098009 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 6 00:16:14.098203 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Aug 6 00:16:14.098368 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 6 00:16:14.098531 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 6 00:16:14.098694 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 6 00:16:14.098916 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Aug 6 00:16:14.099083 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 6 00:16:14.099261 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 6 00:16:14.099427 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 6 00:16:14.099593 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Aug 6 00:16:14.099816 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 6 00:16:14.099982 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 6 00:16:14.100145 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 6 00:16:14.100321 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Aug 6 00:16:14.100484 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 6 00:16:14.100656 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 6 00:16:14.100850 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 6 00:16:14.101014 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Aug 6 00:16:14.101191 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 6 00:16:14.101355 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 6 00:16:14.101518 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 6 00:16:14.101669 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 6 00:16:14.101874 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 6 00:16:14.102023 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 6 00:16:14.102191 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 6 00:16:14.102341 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Aug 6 00:16:14.102512 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Aug 6 00:16:14.102669 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Aug 6 00:16:14.102856 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Aug 6 00:16:14.103028 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Aug 6 00:16:14.104041 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Aug 6 
00:16:14.104260 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Aug 6 00:16:14.104416 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 6 00:16:14.104584 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Aug 6 00:16:14.104739 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Aug 6 00:16:14.104927 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 6 00:16:14.105095 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Aug 6 00:16:14.105400 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Aug 6 00:16:14.105943 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 6 00:16:14.106129 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Aug 6 00:16:14.106301 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Aug 6 00:16:14.106455 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 6 00:16:14.106623 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Aug 6 00:16:14.106808 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Aug 6 00:16:14.106974 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 6 00:16:14.107145 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Aug 6 00:16:14.107313 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Aug 6 00:16:14.107470 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 6 00:16:14.107651 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Aug 6 00:16:14.107855 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Aug 6 00:16:14.108011 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 6 00:16:14.108040 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 6 00:16:14.108055 kernel: PCI: CLS 0 bytes, default 64 Aug 6 00:16:14.108069 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 6 00:16:14.108083 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Aug 6 00:16:14.108097 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 6 00:16:14.108111 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Aug 6 00:16:14.108124 kernel: Initialise system trusted keyrings Aug 6 00:16:14.108138 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 6 00:16:14.108157 kernel: Key type asymmetric registered Aug 6 00:16:14.108182 kernel: Asymmetric key parser 'x509' registered Aug 6 00:16:14.108197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 6 00:16:14.108211 kernel: io scheduler mq-deadline registered Aug 6 00:16:14.108224 kernel: io scheduler kyber registered Aug 6 00:16:14.108238 kernel: io scheduler bfq registered Aug 6 00:16:14.108411 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Aug 6 00:16:14.108579 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Aug 6 00:16:14.108769 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.108958 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Aug 6 00:16:14.109127 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Aug 6 00:16:14.109308 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.109481 kernel: pcieport 
0000:00:02.2: PME: Signaling with IRQ 26 Aug 6 00:16:14.109652 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Aug 6 00:16:14.109887 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.110070 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Aug 6 00:16:14.110250 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Aug 6 00:16:14.110415 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.110584 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Aug 6 00:16:14.110772 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Aug 6 00:16:14.110946 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.111127 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Aug 6 00:16:14.111310 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Aug 6 00:16:14.111480 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.111653 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Aug 6 00:16:14.111878 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Aug 6 00:16:14.112045 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.112236 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Aug 6 00:16:14.112403 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Aug 6 00:16:14.112566 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 6 00:16:14.112588 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 6 00:16:14.112603 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 6 00:16:14.112617 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 6 00:16:14.112637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 6 00:16:14.112651 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 6 00:16:14.112665 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 6 00:16:14.112679 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 6 00:16:14.112693 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 6 00:16:14.112898 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 6 00:16:14.112922 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 6 00:16:14.113070 kernel: rtc_cmos 00:03: registered as rtc0 Aug 6 00:16:14.113248 kernel: rtc_cmos 00:03: setting system clock to 2024-08-06T00:16:13 UTC (1722903373) Aug 6 00:16:14.113404 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 6 00:16:14.113424 kernel: intel_pstate: CPU model not supported Aug 6 00:16:14.113438 kernel: NET: Registered PF_INET6 protocol family Aug 6 00:16:14.113452 kernel: Segment Routing with IPv6 Aug 6 00:16:14.113466 kernel: In-situ OAM (IOAM) with IPv6 Aug 6 00:16:14.113479 kernel: NET: Registered PF_PACKET protocol family Aug 6 00:16:14.113493 kernel: Key type dns_resolver registered Aug 6 00:16:14.113507 kernel: IPI shorthand broadcast: enabled Aug 6 00:16:14.113528 kernel: sched_clock: Marking stable (1388004043, 
236422480)->(1773376370, -148949847) Aug 6 00:16:14.113541 kernel: registered taskstats version 1 Aug 6 00:16:14.113555 kernel: Loading compiled-in X.509 certificates Aug 6 00:16:14.113569 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 6 00:16:14.113582 kernel: Key type .fscrypt registered Aug 6 00:16:14.113596 kernel: Key type fscrypt-provisioning registered Aug 6 00:16:14.113609 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 6 00:16:14.113623 kernel: ima: Allocated hash algorithm: sha1 Aug 6 00:16:14.113637 kernel: ima: No architecture policies found Aug 6 00:16:14.113655 kernel: clk: Disabling unused clocks Aug 6 00:16:14.113670 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 6 00:16:14.113683 kernel: Write protecting the kernel read-only data: 36864k Aug 6 00:16:14.113697 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 6 00:16:14.113710 kernel: Run /init as init process Aug 6 00:16:14.113723 kernel: with arguments: Aug 6 00:16:14.113736 kernel: /init Aug 6 00:16:14.113801 kernel: with environment: Aug 6 00:16:14.113817 kernel: HOME=/ Aug 6 00:16:14.113839 kernel: TERM=linux Aug 6 00:16:14.113853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 6 00:16:14.113880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 6 00:16:14.113900 systemd[1]: Detected virtualization kvm. Aug 6 00:16:14.113914 systemd[1]: Detected architecture x86-64. Aug 6 00:16:14.113928 systemd[1]: Running in initrd. Aug 6 00:16:14.113942 systemd[1]: No hostname configured, using default hostname. Aug 6 00:16:14.113956 systemd[1]: Hostname set to <localhost>. Aug 6 00:16:14.113978 systemd[1]: Initializing machine ID from VM UUID. Aug 6 00:16:14.113992 systemd[1]: Queued start job for default target initrd.target. Aug 6 00:16:14.114007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 00:16:14.114022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 00:16:14.114037 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 6 00:16:14.114051 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 6 00:16:14.114066 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 6 00:16:14.114081 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 6 00:16:14.114102 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 6 00:16:14.114117 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 6 00:16:14.114132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 00:16:14.114147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 6 00:16:14.114161 systemd[1]: Reached target paths.target - Path Units. Aug 6 00:16:14.114188 systemd[1]: Reached target slices.target - Slice Units. 
Aug 6 00:16:14.114210 systemd[1]: Reached target swap.target - Swaps. Aug 6 00:16:14.114224 systemd[1]: Reached target timers.target - Timer Units. Aug 6 00:16:14.114239 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 6 00:16:14.114254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 6 00:16:14.114269 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 6 00:16:14.114284 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 6 00:16:14.114298 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 6 00:16:14.114313 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 6 00:16:14.114327 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 00:16:14.114346 systemd[1]: Reached target sockets.target - Socket Units. Aug 6 00:16:14.114361 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 6 00:16:14.114376 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 6 00:16:14.114390 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 6 00:16:14.114404 systemd[1]: Starting systemd-fsck-usr.service... Aug 6 00:16:14.114419 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 6 00:16:14.114433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 6 00:16:14.114448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 00:16:14.114517 systemd-journald[201]: Collecting audit messages is disabled. Aug 6 00:16:14.114558 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 6 00:16:14.114573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 00:16:14.114588 systemd[1]: Finished systemd-fsck-usr.service. Aug 6 00:16:14.114608 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 6 00:16:14.114625 systemd-journald[201]: Journal started Aug 6 00:16:14.114653 systemd-journald[201]: Runtime Journal (/run/log/journal/aea59aeb59f445d0a1f445329adcf190) is 4.7M, max 38.0M, 33.2M free. Aug 6 00:16:14.062983 systemd-modules-load[202]: Inserted module 'overlay' Aug 6 00:16:14.171005 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 6 00:16:14.171043 kernel: Bridge firewalling registered Aug 6 00:16:14.127538 systemd-modules-load[202]: Inserted module 'br_netfilter' Aug 6 00:16:14.180858 systemd[1]: Started systemd-journald.service - Journal Service. Aug 6 00:16:14.181779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 6 00:16:14.184223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 00:16:14.186343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 00:16:14.204081 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 00:16:14.212378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 00:16:14.217070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 6 00:16:14.226991 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Aug 6 00:16:14.242525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 6 00:16:14.245553 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 00:16:14.246691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 00:16:14.258002 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 6 00:16:14.260821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 00:16:14.274067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 6 00:16:14.289690 dracut-cmdline[234]: dracut-dracut-053 Aug 6 00:16:14.294448 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 6 00:16:14.334397 systemd-resolved[237]: Positive Trust Anchors: Aug 6 00:16:14.334437 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 6 00:16:14.334482 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 6 00:16:14.344053 systemd-resolved[237]: Defaulting to hostname 'linux'. Aug 6 00:16:14.348306 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 6 00:16:14.349785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 6 00:16:14.411889 kernel: SCSI subsystem initialized Aug 6 00:16:14.426790 kernel: Loading iSCSI transport class v2.0-870. Aug 6 00:16:14.445425 kernel: iscsi: registered transport (tcp) Aug 6 00:16:14.477897 kernel: iscsi: registered transport (qla4xxx) Aug 6 00:16:14.478008 kernel: QLogic iSCSI HBA Driver Aug 6 00:16:14.540916 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 6 00:16:14.549060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 6 00:16:14.594152 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 6 00:16:14.594266 kernel: device-mapper: uevent: version 1.0.3 Aug 6 00:16:14.595809 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 6 00:16:14.649796 kernel: raid6: sse2x4 gen() 13823 MB/s Aug 6 00:16:14.667790 kernel: raid6: sse2x2 gen() 9407 MB/s Aug 6 00:16:14.686409 kernel: raid6: sse2x1 gen() 9928 MB/s Aug 6 00:16:14.686524 kernel: raid6: using algorithm sse2x4 gen() 13823 MB/s Aug 6 00:16:14.707064 kernel: raid6: .... 
xor() 7682 MB/s, rmw enabled Aug 6 00:16:14.707156 kernel: raid6: using ssse3x2 recovery algorithm Aug 6 00:16:14.740793 kernel: xor: automatically using best checksumming function avx Aug 6 00:16:14.956789 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 6 00:16:14.972202 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 6 00:16:14.978975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 00:16:15.007672 systemd-udevd[420]: Using default interface naming scheme 'v255'. Aug 6 00:16:15.015068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 00:16:15.022937 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 6 00:16:15.050194 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Aug 6 00:16:15.094852 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 6 00:16:15.101970 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 6 00:16:15.233918 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 00:16:15.245013 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 6 00:16:15.277813 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 6 00:16:15.281461 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 6 00:16:15.283551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 00:16:15.284805 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 6 00:16:15.294024 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 6 00:16:15.323429 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 6 00:16:15.367794 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Aug 6 00:16:15.427674 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 6 00:16:15.427957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 6 00:16:15.427981 kernel: GPT:17805311 != 125829119 Aug 6 00:16:15.427999 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 6 00:16:15.428030 kernel: GPT:17805311 != 125829119 Aug 6 00:16:15.428048 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 6 00:16:15.428066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 00:16:15.428083 kernel: cryptd: max_cpu_qlen set to 1000 Aug 6 00:16:15.428100 kernel: AVX version of gcm_enc/dec engaged. Aug 6 00:16:15.428118 kernel: AES CTR mode by8 optimization enabled Aug 6 00:16:15.430041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 6 00:16:15.431093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 00:16:15.434009 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 00:16:15.434729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 6 00:16:15.436945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 00:16:15.437711 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 00:16:15.464552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 00:16:15.466743 kernel: libata version 3.00 loaded. 
Aug 6 00:16:15.479774 kernel: ahci 0000:00:1f.2: version 3.0 Aug 6 00:16:15.516980 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 6 00:16:15.517019 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 6 00:16:15.517264 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 6 00:16:15.517475 kernel: scsi host0: ahci Aug 6 00:16:15.517694 kernel: scsi host1: ahci Aug 6 00:16:15.517929 kernel: scsi host2: ahci Aug 6 00:16:15.518155 kernel: scsi host3: ahci Aug 6 00:16:15.518367 kernel: scsi host4: ahci Aug 6 00:16:15.518585 kernel: scsi host5: ahci Aug 6 00:16:15.522869 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Aug 6 00:16:15.522896 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Aug 6 00:16:15.522914 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Aug 6 00:16:15.522933 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Aug 6 00:16:15.522960 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Aug 6 00:16:15.522979 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Aug 6 00:16:15.526772 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468) Aug 6 00:16:15.550518 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 6 00:16:15.631873 kernel: ACPI: bus type USB registered Aug 6 00:16:15.631920 kernel: usbcore: registered new interface driver usbfs Aug 6 00:16:15.631941 kernel: usbcore: registered new interface driver hub Aug 6 00:16:15.631960 kernel: usbcore: registered new device driver usb Aug 6 00:16:15.631994 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (474) Aug 6 00:16:15.633068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 00:16:15.648521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 6 00:16:15.661286 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 6 00:16:15.672892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 6 00:16:15.673790 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 6 00:16:15.685007 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 6 00:16:15.689305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 6 00:16:15.694908 disk-uuid[562]: Primary Header is updated. Aug 6 00:16:15.694908 disk-uuid[562]: Secondary Entries is updated. Aug 6 00:16:15.694908 disk-uuid[562]: Secondary Header is updated. Aug 6 00:16:15.699788 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 00:16:15.707833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 00:16:15.720781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 00:16:15.720804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 6 00:16:15.823786 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 6 00:16:15.823894 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 6 00:16:15.830789 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 6 00:16:15.830857 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 6 00:16:15.834347 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 6 00:16:15.834382 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 6 00:16:15.904786 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 6 00:16:15.929944 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Aug 6 00:16:15.930182 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Aug 6 00:16:15.930393 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 6 00:16:15.930606 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Aug 6 00:16:15.930829 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Aug 6 00:16:15.931031 kernel: hub 1-0:1.0: USB hub found Aug 6 00:16:15.931264 kernel: hub 1-0:1.0: 4 ports detected Aug 6 00:16:15.931480 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Aug 6 00:16:15.931696 kernel: hub 2-0:1.0: USB hub found Aug 6 00:16:15.935987 kernel: hub 2-0:1.0: 4 ports detected Aug 6 00:16:16.162970 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Aug 6 00:16:16.305800 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 6 00:16:16.313552 kernel: usbcore: registered new interface driver usbhid Aug 6 00:16:16.313590 kernel: usbhid: USB HID core driver Aug 6 00:16:16.321324 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Aug 6 00:16:16.321401 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Aug 6 00:16:16.716897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 6 00:16:16.717878 disk-uuid[563]: The operation has completed successfully. Aug 6 00:16:16.775442 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 6 00:16:16.775632 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 6 00:16:16.797048 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 6 00:16:16.801130 sh[585]: Success Aug 6 00:16:16.820791 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Aug 6 00:16:16.921569 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 6 00:16:16.924721 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 6 00:16:16.926587 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 6 00:16:16.950878 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 6 00:16:16.950957 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 6 00:16:16.953012 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 6 00:16:16.955200 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 6 00:16:16.956859 kernel: BTRFS info (device dm-0): using free space tree Aug 6 00:16:16.969141 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 6 00:16:16.970708 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
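verity-setup.service opens the read-only /usr image as the dm-verity mapping /dev/mapper/usr (sha256-based, AVX-accelerated per the kernel line), and the Btrfs filesystem inside it is then mounted at /sysusr/usr. On a running host the mapping can be inspected with the stock device-mapper tools; a quick, illustrative check:

  $ veritysetup status usr   # data device, hash device and root hash of the /usr mapping
  $ dmsetup table usr        # raw device-mapper table for the same target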
Aug 6 00:16:16.979073 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 6 00:16:16.981935 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 6 00:16:16.995774 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 00:16:16.999462 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 00:16:16.999534 kernel: BTRFS info (device vda6): using free space tree Aug 6 00:16:17.007777 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 00:16:17.023387 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 6 00:16:17.028289 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 00:16:17.032650 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 6 00:16:17.041154 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 6 00:16:17.156011 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 6 00:16:17.171062 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 6 00:16:17.189814 ignition[687]: Ignition 2.19.0 Aug 6 00:16:17.189840 ignition[687]: Stage: fetch-offline Aug 6 00:16:17.189935 ignition[687]: no configs at "/usr/lib/ignition/base.d" Aug 6 00:16:17.189957 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 6 00:16:17.194720 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 6 00:16:17.190148 ignition[687]: parsed url from cmdline: "" Aug 6 00:16:17.190155 ignition[687]: no config URL provided Aug 6 00:16:17.190165 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Aug 6 00:16:17.190181 ignition[687]: no config at "/usr/lib/ignition/user.ign" Aug 6 00:16:17.190190 ignition[687]: failed to fetch config: resource requires networking Aug 6 00:16:17.190492 ignition[687]: Ignition finished successfully Aug 6 00:16:17.201340 systemd-networkd[772]: lo: Link UP Aug 6 00:16:17.201346 systemd-networkd[772]: lo: Gained carrier Aug 6 00:16:17.203658 systemd-networkd[772]: Enumeration completed Aug 6 00:16:17.203966 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 6 00:16:17.204240 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 00:16:17.204246 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 6 00:16:17.205962 systemd-networkd[772]: eth0: Link UP Aug 6 00:16:17.205968 systemd-networkd[772]: eth0: Gained carrier Aug 6 00:16:17.205980 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 00:16:17.206396 systemd[1]: Reached target network.target - Network. Aug 6 00:16:17.220998 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 6 00:16:17.239268 ignition[776]: Ignition 2.19.0 Aug 6 00:16:17.239291 ignition[776]: Stage: fetch Aug 6 00:16:17.239580 ignition[776]: no configs at "/usr/lib/ignition/base.d" Aug 6 00:16:17.239601 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 6 00:16:17.239739 ignition[776]: parsed url from cmdline: "" Aug 6 00:16:17.239768 ignition[776]: no config URL provided Aug 6 00:16:17.239780 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Aug 6 00:16:17.239796 ignition[776]: no config at "/usr/lib/ignition/user.ign" Aug 6 00:16:17.239963 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Aug 6 00:16:17.240162 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Aug 6 00:16:17.240196 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Aug 6 00:16:17.240333 ignition[776]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 6 00:16:17.264879 systemd-networkd[772]: eth0: DHCPv4 address 10.244.27.62/30, gateway 10.244.27.61 acquired from 10.244.27.61 Aug 6 00:16:17.440619 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Aug 6 00:16:17.456520 ignition[776]: GET result: OK Aug 6 00:16:17.457676 ignition[776]: parsing config with SHA512: 7cc66990d46e86f583ff69a0c354ef01ed06a3a3920bdc69b3e03a66e560b50429ac657764881c7e6497737a8f866810b3d8504773b5dd338e1875e6102eb575 Aug 6 00:16:17.464981 unknown[776]: fetched base config from "system" Aug 6 00:16:17.465486 ignition[776]: fetch: fetch complete Aug 6 00:16:17.465014 unknown[776]: fetched base config from "system" Aug 6 00:16:17.465495 ignition[776]: fetch: fetch passed Aug 6 00:16:17.465024 unknown[776]: fetched user config from "openstack" Aug 6 00:16:17.465565 ignition[776]: Ignition finished successfully Aug 6 00:16:17.469449 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 6 00:16:17.478053 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 6 00:16:17.498643 ignition[784]: Ignition 2.19.0 Aug 6 00:16:17.498669 ignition[784]: Stage: kargs Aug 6 00:16:17.498987 ignition[784]: no configs at "/usr/lib/ignition/base.d" Aug 6 00:16:17.499009 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 6 00:16:17.502823 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 6 00:16:17.500228 ignition[784]: kargs: kargs passed Aug 6 00:16:17.500305 ignition[784]: Ignition finished successfully Aug 6 00:16:17.511001 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 6 00:16:17.531030 ignition[792]: Ignition 2.19.0 Aug 6 00:16:17.531050 ignition[792]: Stage: disks Aug 6 00:16:17.531351 ignition[792]: no configs at "/usr/lib/ignition/base.d" Aug 6 00:16:17.531372 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 6 00:16:17.535945 ignition[792]: disks: disks passed Aug 6 00:16:17.536665 ignition[792]: Ignition finished successfully Aug 6 00:16:17.538556 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 6 00:16:17.539972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 6 00:16:17.540721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 6 00:16:17.541723 systemd[1]: Reached target local-fs.target - Local File Systems. 
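The fetch stage first tries the link-local OpenStack metadata endpoint before eth0 has an address, fails with "network is unreachable", and succeeds on the second attempt once the DHCPv4 lease arrives. The same user data can be retrieved by hand from inside the instance; a minimal check, assuming curl and the endpoint shown in the log:

  $ curl -s http://169.254.169.254/openstack/latest/user_data   # the config Ignition just parsed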
Aug 6 00:16:17.542405 systemd[1]: Reached target sysinit.target - System Initialization. Aug 6 00:16:17.543072 systemd[1]: Reached target basic.target - Basic System. Aug 6 00:16:17.551029 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 6 00:16:17.572698 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 6 00:16:17.576468 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 6 00:16:17.582891 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 6 00:16:17.725772 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 6 00:16:17.726588 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 6 00:16:17.727938 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 6 00:16:17.740936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 6 00:16:17.743880 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 6 00:16:17.747225 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 6 00:16:17.749370 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Aug 6 00:16:17.753770 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 6 00:16:17.753817 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 6 00:16:17.762073 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809) Aug 6 00:16:17.762129 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 00:16:17.764765 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 00:16:17.766148 kernel: BTRFS info (device vda6): using free space tree Aug 6 00:16:17.767787 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 6 00:16:17.770818 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 6 00:16:17.783769 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 00:16:17.791784 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 6 00:16:17.852681 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Aug 6 00:16:17.859002 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Aug 6 00:16:17.870406 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Aug 6 00:16:17.879511 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Aug 6 00:16:17.985806 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 6 00:16:17.989891 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 6 00:16:17.992929 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 6 00:16:18.007088 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 6 00:16:18.010319 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 00:16:18.035192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
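systemd-fsck checks the ext4 ROOT filesystem (found by label) before it is mounted read-write at /sysroot, while the Btrfs OEM partition on vda6 is mounted separately at /sysroot/oem. The same check can be repeated offline while the filesystem is unmounted; illustrative only:

  $ e2fsck -fn /dev/disk/by-label/ROOT   # forced, read-only check of the ext4 root filesystem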
Aug 6 00:16:18.042257 ignition[927]: INFO : Ignition 2.19.0 Aug 6 00:16:18.042257 ignition[927]: INFO : Stage: mount Aug 6 00:16:18.044119 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 00:16:18.044119 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 6 00:16:18.044119 ignition[927]: INFO : mount: mount passed Aug 6 00:16:18.044119 ignition[927]: INFO : Ignition finished successfully Aug 6 00:16:18.045023 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 6 00:16:18.647707 systemd-networkd[772]: eth0: Gained IPv6LL Aug 6 00:16:19.684011 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6cf:24:19ff:fef4:1b3e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6cf:24:19ff:fef4:1b3e/64 assigned by NDisc. Aug 6 00:16:19.684028 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 6 00:16:24.918246 coreos-metadata[811]: Aug 06 00:16:24.918 WARN failed to locate config-drive, using the metadata service API instead Aug 6 00:16:24.942197 coreos-metadata[811]: Aug 06 00:16:24.942 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 6 00:16:24.989657 coreos-metadata[811]: Aug 06 00:16:24.989 INFO Fetch successful Aug 6 00:16:24.990591 coreos-metadata[811]: Aug 06 00:16:24.989 INFO wrote hostname srv-iww3y.gb1.brightbox.com to /sysroot/etc/hostname Aug 6 00:16:24.993227 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Aug 6 00:16:24.993412 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Aug 6 00:16:25.008927 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 6 00:16:25.023269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 6 00:16:25.046821 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (945) Aug 6 00:16:25.057431 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 6 00:16:25.057498 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 6 00:16:25.057518 kernel: BTRFS info (device vda6): using free space tree Aug 6 00:16:25.068776 kernel: BTRFS info (device vda6): auto enabling async discard Aug 6 00:16:25.072838 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
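flatcar-openstack-hostname.service falls back from the missing config drive to the metadata service, fetches the instance hostname and writes it into the target root before switch-root. Roughly what it does, using the endpoint from the log (illustrative; the service's own HTTP client handles retries):

  $ curl -s http://169.254.169.254/latest/meta-data/hostname > /sysroot/etc/hostname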
Aug 6 00:16:25.114053 ignition[963]: INFO : Ignition 2.19.0 Aug 6 00:16:25.114053 ignition[963]: INFO : Stage: files Aug 6 00:16:25.116093 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 00:16:25.116093 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 6 00:16:25.116093 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Aug 6 00:16:25.118839 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 6 00:16:25.118839 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 6 00:16:25.120936 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 6 00:16:25.121945 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 6 00:16:25.121945 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 6 00:16:25.121722 unknown[963]: wrote ssh authorized keys file for user: core Aug 6 00:16:25.125051 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 6 00:16:25.125051 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 6 00:16:25.881772 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 6 00:16:26.094374 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 6 00:16:26.094374 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 00:16:26.097081 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Aug 6 00:16:26.741486 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 6 00:16:34.333601 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 6 00:16:34.333601 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 6 00:16:34.337257 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 6 00:16:34.337257 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 6 00:16:34.337257 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 6 00:16:34.337257 ignition[963]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 6 00:16:34.337257 ignition[963]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 6 00:16:34.337257 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 6 00:16:34.337257 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 6 00:16:34.337257 ignition[963]: INFO : files: files passed Aug 6 00:16:34.348554 ignition[963]: INFO : Ignition finished successfully Aug 6 00:16:34.340105 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 6 00:16:34.355181 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 6 00:16:34.375649 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 6 00:16:34.383098 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 6 00:16:34.383365 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 6 00:16:34.394044 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 6 00:16:34.395880 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 6 00:16:34.398457 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 6 00:16:34.398077 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 6 00:16:34.401179 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 6 00:16:34.407018 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 6 00:16:34.463711 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 6 00:16:34.463944 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 6 00:16:34.466366 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Aug 6 00:16:34.467220 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 6 00:16:34.468964 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 6 00:16:34.474974 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 6 00:16:34.496410 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 6 00:16:34.503986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 6 00:16:34.534198 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 6 00:16:34.535225 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 00:16:34.537025 systemd[1]: Stopped target timers.target - Timer Units. Aug 6 00:16:34.538615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 6 00:16:34.538902 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 6 00:16:34.540869 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 6 00:16:34.541848 systemd[1]: Stopped target basic.target - Basic System. Aug 6 00:16:34.543451 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 6 00:16:34.544942 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 6 00:16:34.546325 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 6 00:16:34.547962 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 6 00:16:34.549653 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 6 00:16:34.551415 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 6 00:16:34.553092 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 6 00:16:34.554688 systemd[1]: Stopped target swap.target - Swaps. Aug 6 00:16:34.556093 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 6 00:16:34.556368 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 6 00:16:34.558166 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 6 00:16:34.559303 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 00:16:34.560930 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 6 00:16:34.561153 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 00:16:34.562587 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 6 00:16:34.562905 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 6 00:16:34.564983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 6 00:16:34.565300 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 6 00:16:34.567027 systemd[1]: ignition-files.service: Deactivated successfully. Aug 6 00:16:34.567256 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 6 00:16:34.576413 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 6 00:16:34.580144 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 6 00:16:34.581219 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 6 00:16:34.581495 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 00:16:34.586036 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Aug 6 00:16:34.586303 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 6 00:16:34.595623 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 6 00:16:34.596628 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 6 00:16:34.618774 ignition[1016]: INFO : Ignition 2.19.0 Aug 6 00:16:34.618774 ignition[1016]: INFO : Stage: umount Aug 6 00:16:34.618774 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 6 00:16:34.618774 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 6 00:16:34.622621 ignition[1016]: INFO : umount: umount passed Aug 6 00:16:34.622621 ignition[1016]: INFO : Ignition finished successfully Aug 6 00:16:34.623858 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 6 00:16:34.624847 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 6 00:16:34.629503 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 6 00:16:34.631181 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 6 00:16:34.631367 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 6 00:16:34.634573 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 6 00:16:34.634689 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 6 00:16:34.636350 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 6 00:16:34.636422 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 6 00:16:34.637737 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 6 00:16:34.637837 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 6 00:16:34.639269 systemd[1]: Stopped target network.target - Network. Aug 6 00:16:34.640552 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 6 00:16:34.640645 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 6 00:16:34.642143 systemd[1]: Stopped target paths.target - Path Units. Aug 6 00:16:34.643453 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 6 00:16:34.646853 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 00:16:34.648558 systemd[1]: Stopped target slices.target - Slice Units. Aug 6 00:16:34.650301 systemd[1]: Stopped target sockets.target - Socket Units. Aug 6 00:16:34.651952 systemd[1]: iscsid.socket: Deactivated successfully. Aug 6 00:16:34.652060 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 6 00:16:34.653263 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 6 00:16:34.653350 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 6 00:16:34.654613 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 6 00:16:34.654697 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 6 00:16:34.656087 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 6 00:16:34.656155 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 6 00:16:34.657645 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 6 00:16:34.657734 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 6 00:16:34.659523 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 6 00:16:34.662020 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Aug 6 00:16:34.664002 systemd-networkd[772]: eth0: DHCPv6 lease lost Aug 6 00:16:34.667990 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 6 00:16:34.668211 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 6 00:16:34.669979 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 6 00:16:34.670064 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 6 00:16:34.677122 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 6 00:16:34.678707 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 6 00:16:34.679700 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 6 00:16:34.681452 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 00:16:34.685206 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 6 00:16:34.685387 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 6 00:16:34.695494 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 6 00:16:34.696849 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 00:16:34.706851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 6 00:16:34.706975 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 6 00:16:34.708029 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 6 00:16:34.708101 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 00:16:34.709233 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 6 00:16:34.709318 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 6 00:16:34.711657 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 6 00:16:34.711737 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 6 00:16:34.713403 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 6 00:16:34.713509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 6 00:16:34.725079 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 6 00:16:34.727210 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 6 00:16:34.727315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 6 00:16:34.729908 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 6 00:16:34.730000 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 6 00:16:34.730787 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 6 00:16:34.730873 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 00:16:34.732551 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 6 00:16:34.732638 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 00:16:34.734498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 6 00:16:34.734572 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 00:16:34.735428 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 6 00:16:34.735511 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 00:16:34.739043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 6 00:16:34.739144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 00:16:34.740972 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 6 00:16:34.741174 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 6 00:16:34.743518 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 6 00:16:34.743708 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 6 00:16:34.746327 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 6 00:16:34.753157 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 6 00:16:34.767987 systemd[1]: Switching root. Aug 6 00:16:34.804145 systemd-journald[201]: Journal stopped Aug 6 00:16:36.413408 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Aug 6 00:16:36.413602 kernel: SELinux: policy capability network_peer_controls=1 Aug 6 00:16:36.413666 kernel: SELinux: policy capability open_perms=1 Aug 6 00:16:36.413701 kernel: SELinux: policy capability extended_socket_class=1 Aug 6 00:16:36.413728 kernel: SELinux: policy capability always_check_network=0 Aug 6 00:16:36.415334 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 6 00:16:36.415373 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 6 00:16:36.415404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 6 00:16:36.415442 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 6 00:16:36.415470 kernel: audit: type=1403 audit(1722903395.106:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 6 00:16:36.415524 systemd[1]: Successfully loaded SELinux policy in 55.128ms. Aug 6 00:16:36.415596 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.356ms. Aug 6 00:16:36.415639 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 6 00:16:36.415669 systemd[1]: Detected virtualization kvm. Aug 6 00:16:36.415700 systemd[1]: Detected architecture x86-64. Aug 6 00:16:36.415722 systemd[1]: Detected first boot. Aug 6 00:16:36.415758 systemd[1]: Hostname set to . Aug 6 00:16:36.415799 systemd[1]: Initializing machine ID from VM UUID. Aug 6 00:16:36.415830 zram_generator::config[1058]: No configuration found. Aug 6 00:16:36.415877 systemd[1]: Populated /etc with preset unit settings. Aug 6 00:16:36.415900 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 6 00:16:36.415921 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 6 00:16:36.415949 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 6 00:16:36.415972 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 6 00:16:36.415994 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 6 00:16:36.416015 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 6 00:16:36.416043 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 6 00:16:36.416072 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 6 00:16:36.416110 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Aug 6 00:16:36.416133 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 6 00:16:36.416170 systemd[1]: Created slice user.slice - User and Session Slice. Aug 6 00:16:36.416202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 6 00:16:36.416231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 6 00:16:36.416254 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 6 00:16:36.416275 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 6 00:16:36.416304 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 6 00:16:36.416342 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 6 00:16:36.416365 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 6 00:16:36.416397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 6 00:16:36.416425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 6 00:16:36.416457 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 6 00:16:36.416479 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 6 00:16:36.416508 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 6 00:16:36.416542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 6 00:16:36.416572 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 6 00:16:36.416595 systemd[1]: Reached target slices.target - Slice Units. Aug 6 00:16:36.416616 systemd[1]: Reached target swap.target - Swaps. Aug 6 00:16:36.416644 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 6 00:16:36.416672 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 6 00:16:36.416720 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 6 00:16:36.416821 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 6 00:16:36.416863 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 6 00:16:36.416887 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 6 00:16:36.416909 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 6 00:16:36.416937 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 6 00:16:36.416960 systemd[1]: Mounting media.mount - External Media Directory... Aug 6 00:16:36.416982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:36.417003 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 6 00:16:36.417037 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 6 00:16:36.417067 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 6 00:16:36.417091 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 6 00:16:36.417113 systemd[1]: Reached target machines.target - Containers. Aug 6 00:16:36.417141 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Aug 6 00:16:36.417170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 00:16:36.417203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 6 00:16:36.417227 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 6 00:16:36.417259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 00:16:36.417283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 6 00:16:36.417304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 00:16:36.417331 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 6 00:16:36.417363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 00:16:36.417385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 6 00:16:36.417406 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 6 00:16:36.417428 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 6 00:16:36.417462 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 6 00:16:36.417486 systemd[1]: Stopped systemd-fsck-usr.service. Aug 6 00:16:36.417515 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 6 00:16:36.417537 kernel: fuse: init (API version 7.39) Aug 6 00:16:36.417558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 6 00:16:36.417580 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 6 00:16:36.417613 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 6 00:16:36.417640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 6 00:16:36.417662 systemd[1]: verity-setup.service: Deactivated successfully. Aug 6 00:16:36.417702 systemd[1]: Stopped verity-setup.service. Aug 6 00:16:36.417736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:36.417824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 6 00:16:36.417858 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 6 00:16:36.417880 systemd[1]: Mounted media.mount - External Media Directory. Aug 6 00:16:36.417908 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 6 00:16:36.417944 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 6 00:16:36.417967 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 6 00:16:36.417998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 6 00:16:36.418021 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 6 00:16:36.418042 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 6 00:16:36.418063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 00:16:36.418085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 00:16:36.418122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 00:16:36.418146 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Aug 6 00:16:36.418167 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 6 00:16:36.418197 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 6 00:16:36.418226 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 6 00:16:36.418255 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 6 00:16:36.418322 systemd-journald[1146]: Collecting audit messages is disabled. Aug 6 00:16:36.418397 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 6 00:16:36.418432 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 6 00:16:36.418461 systemd-journald[1146]: Journal started Aug 6 00:16:36.418516 systemd-journald[1146]: Runtime Journal (/run/log/journal/aea59aeb59f445d0a1f445329adcf190) is 4.7M, max 38.0M, 33.2M free. Aug 6 00:16:35.940331 systemd[1]: Queued start job for default target multi-user.target. Aug 6 00:16:35.968544 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 6 00:16:35.969439 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 6 00:16:36.424778 kernel: loop: module loaded Aug 6 00:16:36.428843 kernel: ACPI: bus type drm_connector registered Aug 6 00:16:36.434780 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 6 00:16:36.453772 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 6 00:16:36.459938 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 6 00:16:36.459999 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 6 00:16:36.465841 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 6 00:16:36.474768 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 6 00:16:36.484854 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 6 00:16:36.489568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 00:16:36.498242 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 6 00:16:36.506405 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 00:16:36.508844 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 6 00:16:36.521337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 6 00:16:36.533330 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 6 00:16:36.553962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 6 00:16:36.554047 systemd[1]: Started systemd-journald.service - Journal Service. Aug 6 00:16:36.566315 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 6 00:16:36.569305 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 6 00:16:36.569603 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 6 00:16:36.571273 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 00:16:36.571882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 6 00:16:36.574347 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 6 00:16:36.576284 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 6 00:16:36.579229 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 6 00:16:36.631029 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 6 00:16:36.633043 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 00:16:36.642797 kernel: loop0: detected capacity change from 0 to 80568 Aug 6 00:16:36.644869 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 6 00:16:36.647703 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 6 00:16:36.658805 kernel: block loop0: the capability attribute has been deprecated. Aug 6 00:16:36.663465 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 6 00:16:36.698833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 6 00:16:36.712443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 6 00:16:36.712550 systemd-journald[1146]: Time spent on flushing to /var/log/journal/aea59aeb59f445d0a1f445329adcf190 is 71.221ms for 1149 entries. Aug 6 00:16:36.712550 systemd-journald[1146]: System Journal (/var/log/journal/aea59aeb59f445d0a1f445329adcf190) is 8.0M, max 584.8M, 576.8M free. Aug 6 00:16:36.817350 systemd-journald[1146]: Received client request to flush runtime journal. Aug 6 00:16:36.817412 kernel: loop1: detected capacity change from 0 to 209816 Aug 6 00:16:36.732279 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Aug 6 00:16:36.732303 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Aug 6 00:16:36.763414 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 6 00:16:36.773087 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 6 00:16:36.776201 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 6 00:16:36.779484 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 6 00:16:36.820303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 6 00:16:36.844831 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 6 00:16:36.856304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 6 00:16:36.861084 kernel: loop2: detected capacity change from 0 to 139760 Aug 6 00:16:36.899586 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 6 00:16:36.927062 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 6 00:16:36.937043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 6 00:16:36.940849 kernel: loop3: detected capacity change from 0 to 8 Aug 6 00:16:36.973798 kernel: loop4: detected capacity change from 0 to 80568 Aug 6 00:16:37.001826 kernel: loop5: detected capacity change from 0 to 209816 Aug 6 00:16:37.009379 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Aug 6 00:16:37.009408 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. 
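systemd-journal-flush.service moves the runtime journal out of /run/log/journal into persistent storage under /var/log/journal once the root filesystem is writable; the quoted sizes (4.7M runtime, 584.8M system maximum) are journald's disk-usage caps for this machine. The manual equivalents, for illustration:

  $ journalctl --flush        # flush the runtime journal to /var/log/journal
  $ journalctl --disk-usage   # report how much space the journals currently take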
Aug 6 00:16:37.026420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 6 00:16:37.034812 kernel: loop6: detected capacity change from 0 to 139760 Aug 6 00:16:37.053901 kernel: loop7: detected capacity change from 0 to 8 Aug 6 00:16:37.055579 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Aug 6 00:16:37.058564 (sd-merge)[1218]: Merged extensions into '/usr'. Aug 6 00:16:37.063588 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Aug 6 00:16:37.063902 systemd[1]: Reloading... Aug 6 00:16:37.185811 zram_generator::config[1243]: No configuration found. Aug 6 00:16:37.343529 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 6 00:16:37.399362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 00:16:37.472086 systemd[1]: Reloading finished in 407 ms. Aug 6 00:16:37.505386 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 6 00:16:37.506891 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 6 00:16:37.520079 systemd[1]: Starting ensure-sysext.service... Aug 6 00:16:37.529052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 6 00:16:37.546137 systemd[1]: Reloading requested from client PID 1299 ('systemctl') (unit ensure-sysext.service)... Aug 6 00:16:37.546382 systemd[1]: Reloading... Aug 6 00:16:37.595305 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 6 00:16:37.596622 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 6 00:16:37.600159 systemd-tmpfiles[1300]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 6 00:16:37.600597 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Aug 6 00:16:37.600705 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Aug 6 00:16:37.609226 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot. Aug 6 00:16:37.609247 systemd-tmpfiles[1300]: Skipping /boot Aug 6 00:16:37.639531 systemd-tmpfiles[1300]: Detected autofs mount point /boot during canonicalization of boot. Aug 6 00:16:37.639554 systemd-tmpfiles[1300]: Skipping /boot Aug 6 00:16:37.647804 zram_generator::config[1324]: No configuration found. Aug 6 00:16:37.847172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 00:16:37.914872 systemd[1]: Reloading finished in 367 ms. Aug 6 00:16:37.938707 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 6 00:16:37.945505 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 6 00:16:37.957993 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 6 00:16:37.965056 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 6 00:16:37.971080 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
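The (sd-merge) lines are systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-openstack extension images over /usr, after which systemd reloads so the units shipped in those extensions become visible; ensure-sysext keeps the merge in sync afterwards. On the booted system the merge can be inspected and refreshed with the standard tool, for example:

  $ systemd-sysext status    # which hierarchies are extended and by which images
  $ systemd-sysext refresh   # re-merge after adding or removing an extension image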
Aug 6 00:16:37.984005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 6 00:16:37.996137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 6 00:16:38.005312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 6 00:16:38.014540 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:38.014872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 00:16:38.020181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 6 00:16:38.026344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 6 00:16:38.043786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 6 00:16:38.045106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 00:16:38.045314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:38.063486 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 6 00:16:38.069219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:38.069577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 00:16:38.070315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 00:16:38.070520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:38.082049 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 6 00:16:38.086545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:38.088639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 6 00:16:38.090373 systemd-udevd[1393]: Using default interface naming scheme 'v255'. Aug 6 00:16:38.098245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 6 00:16:38.102401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 6 00:16:38.102676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 6 00:16:38.104127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 6 00:16:38.104379 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 6 00:16:38.107390 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 6 00:16:38.107611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 6 00:16:38.110293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 6 00:16:38.110554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 6 00:16:38.112233 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Aug 6 00:16:38.112456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 6 00:16:38.121349 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 6 00:16:38.121856 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 6 00:16:38.128998 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 6 00:16:38.139214 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 6 00:16:38.158375 systemd[1]: Finished ensure-sysext.service. Aug 6 00:16:38.172049 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 6 00:16:38.197440 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 6 00:16:38.205646 augenrules[1417]: No rules Aug 6 00:16:38.207665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 6 00:16:38.209439 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 6 00:16:38.223796 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 6 00:16:38.230418 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 6 00:16:38.256131 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 6 00:16:38.258900 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 6 00:16:38.389538 systemd-resolved[1392]: Positive Trust Anchors: Aug 6 00:16:38.390266 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 6 00:16:38.390318 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 6 00:16:38.405646 systemd-resolved[1392]: Using system hostname 'srv-iww3y.gb1.brightbox.com'. Aug 6 00:16:38.411963 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 6 00:16:38.413489 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 6 00:16:38.429631 systemd-networkd[1432]: lo: Link UP Aug 6 00:16:38.429647 systemd-networkd[1432]: lo: Gained carrier Aug 6 00:16:38.430789 systemd-networkd[1432]: Enumeration completed Aug 6 00:16:38.430943 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 6 00:16:38.431904 systemd[1]: Reached target network.target - Network. Aug 6 00:16:38.445029 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 6 00:16:38.446869 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 6 00:16:38.448796 systemd[1]: Reached target time-set.target - System Time Set. 
Aug 6 00:16:38.469799 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1434) Aug 6 00:16:38.474571 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 6 00:16:38.521801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1435) Aug 6 00:16:38.573360 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 00:16:38.573375 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 6 00:16:38.578060 systemd-networkd[1432]: eth0: Link UP Aug 6 00:16:38.578075 systemd-networkd[1432]: eth0: Gained carrier Aug 6 00:16:38.578094 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 6 00:16:38.594862 systemd-networkd[1432]: eth0: DHCPv4 address 10.244.27.62/30, gateway 10.244.27.61 acquired from 10.244.27.61 Aug 6 00:16:38.596646 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. Aug 6 00:16:38.646236 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 6 00:16:38.655793 kernel: mousedev: PS/2 mouse device common for all mice Aug 6 00:16:38.656073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 6 00:16:38.657801 kernel: ACPI: button: Power Button [PWRF] Aug 6 00:16:38.667018 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 6 00:16:38.703695 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 6 00:16:38.724324 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 6 00:16:38.724617 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 6 00:16:38.729611 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Aug 6 00:16:38.717275 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 6 00:16:38.789314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 6 00:16:38.991692 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 6 00:16:39.030455 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 6 00:16:39.039050 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 6 00:16:39.070639 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 00:16:39.111016 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 6 00:16:39.112344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 6 00:16:39.113222 systemd[1]: Reached target sysinit.target - System Initialization. Aug 6 00:16:39.114257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 6 00:16:39.115591 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 6 00:16:39.118958 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 6 00:16:39.119977 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Aug 6 00:16:39.120794 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 6 00:16:39.121578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 6 00:16:39.121652 systemd[1]: Reached target paths.target - Path Units. Aug 6 00:16:39.122329 systemd[1]: Reached target timers.target - Timer Units. Aug 6 00:16:39.124583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 6 00:16:39.128020 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 6 00:16:39.134421 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 6 00:16:39.136997 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 6 00:16:39.138367 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 6 00:16:39.139265 systemd[1]: Reached target sockets.target - Socket Units. Aug 6 00:16:39.139966 systemd[1]: Reached target basic.target - Basic System. Aug 6 00:16:39.140677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 6 00:16:39.140725 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 6 00:16:39.143883 systemd[1]: Starting containerd.service - containerd container runtime... Aug 6 00:16:39.151971 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 6 00:16:39.155936 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 6 00:16:39.158785 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 6 00:16:39.165915 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 6 00:16:39.172211 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 6 00:16:39.173309 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 6 00:16:39.176666 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 6 00:16:39.181220 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 6 00:16:39.185970 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 6 00:16:39.196002 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 6 00:16:39.215023 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 6 00:16:39.217280 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 6 00:16:39.219083 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 6 00:16:39.237649 systemd[1]: Starting update-engine.service - Update Engine... Aug 6 00:16:39.242135 jq[1477]: false Aug 6 00:16:39.244894 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 6 00:16:39.248499 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 6 00:16:39.255217 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 6 00:16:39.255981 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Aug 6 00:16:39.258494 dbus-daemon[1476]: [system] SELinux support is enabled Aug 6 00:16:39.262815 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1432 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 6 00:16:39.266023 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 6 00:16:39.277500 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 6 00:16:39.277560 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 6 00:16:39.279103 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 6 00:16:39.279149 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 6 00:16:39.280060 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 6 00:16:39.293978 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 6 00:16:39.311975 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 6 00:16:39.312851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 6 00:16:39.347909 jq[1489]: true Aug 6 00:16:39.347088 systemd[1]: motdgen.service: Deactivated successfully. Aug 6 00:16:39.349186 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 6 00:16:39.363778 tar[1494]: linux-amd64/helm Aug 6 00:16:39.386246 update_engine[1486]: I0806 00:16:39.365716 1486 main.cc:92] Flatcar Update Engine starting Aug 6 00:16:39.368375 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 6 00:16:39.387105 extend-filesystems[1478]: Found loop4 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found loop5 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found loop6 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found loop7 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda1 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda2 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda3 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found usr Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda4 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda6 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda7 Aug 6 00:16:39.387105 extend-filesystems[1478]: Found vda9 Aug 6 00:16:39.387105 extend-filesystems[1478]: Checking size of /dev/vda9 Aug 6 00:16:39.417992 extend-filesystems[1478]: Resized partition /dev/vda9 Aug 6 00:16:39.396369 systemd[1]: Started update-engine.service - Update Engine. Aug 6 00:16:39.418895 update_engine[1486]: I0806 00:16:39.400422 1486 update_check_scheduler.cc:74] Next update check in 11m37s Aug 6 00:16:39.429201 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 6 00:16:39.430997 jq[1511]: true Aug 6 00:16:39.437793 extend-filesystems[1517]: resize2fs 1.47.0 (5-Feb-2023) Aug 6 00:16:39.457795 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Aug 6 00:16:39.482801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1429) Aug 6 00:16:39.488552 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Aug 6 00:16:39.491103 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 6 00:16:39.502085 systemd-logind[1485]: New seat seat0. Aug 6 00:16:39.509599 systemd[1]: Started systemd-logind.service - User Login Management. Aug 6 00:16:39.602430 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 6 00:16:40.260348 systemd-resolved[1392]: Clock change detected. Flushing caches. Aug 6 00:16:40.273733 systemd-timesyncd[1415]: Contacted time server 57.128.182.127:123 (0.flatcar.pool.ntp.org). Aug 6 00:16:40.273862 systemd-timesyncd[1415]: Initial clock synchronization to Tue 2024-08-06 00:16:40.259325 UTC. Aug 6 00:16:40.288419 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 6 00:16:40.290151 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1504 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 6 00:16:40.290456 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 6 00:16:40.316345 systemd[1]: Starting polkit.service - Authorization Manager... Aug 6 00:16:40.318989 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 6 00:16:40.343017 bash[1535]: Updated "/home/core/.ssh/authorized_keys" Aug 6 00:16:40.342886 polkitd[1536]: Started polkitd version 121 Aug 6 00:16:40.343595 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 6 00:16:40.343595 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 6 00:16:40.343595 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 6 00:16:40.323766 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 6 00:16:40.351994 extend-filesystems[1478]: Resized filesystem in /dev/vda9 Aug 6 00:16:40.337070 systemd[1]: Starting sshkeys.service... Aug 6 00:16:40.346330 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 6 00:16:40.346658 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 6 00:16:40.383364 polkitd[1536]: Loading rules from directory /etc/polkit-1/rules.d Aug 6 00:16:40.384127 polkitd[1536]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 6 00:16:40.392413 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 6 00:16:40.399742 systemd-networkd[1432]: eth0: Gained IPv6LL Aug 6 00:16:40.405210 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 6 00:16:40.422035 polkitd[1536]: Finished loading, compiling and executing 2 rules Aug 6 00:16:40.422923 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 6 00:16:40.425388 systemd[1]: Reached target network-online.target - Network is Online. 
Aug 6 00:16:40.427133 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 6 00:16:40.429884 polkitd[1536]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 6 00:16:40.439317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:16:40.448331 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 6 00:16:40.450626 systemd[1]: Started polkit.service - Authorization Manager. Aug 6 00:16:40.467805 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 6 00:16:40.494009 systemd-hostnamed[1504]: Hostname set to (static) Aug 6 00:16:40.571198 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 6 00:16:40.589375 containerd[1508]: time="2024-08-06T00:16:40.587902456Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 6 00:16:40.692610 containerd[1508]: time="2024-08-06T00:16:40.692472243Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 6 00:16:40.697773 containerd[1508]: time="2024-08-06T00:16:40.695045569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 6 00:16:40.702234 containerd[1508]: time="2024-08-06T00:16:40.702188146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 6 00:16:40.702533 containerd[1508]: time="2024-08-06T00:16:40.702351890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 6 00:16:40.702796 containerd[1508]: time="2024-08-06T00:16:40.702762239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 00:16:40.703741 containerd[1508]: time="2024-08-06T00:16:40.703710974Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 6 00:16:40.704074 containerd[1508]: time="2024-08-06T00:16:40.704042573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 6 00:16:40.705121 containerd[1508]: time="2024-08-06T00:16:40.705087022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 00:16:40.705257 containerd[1508]: time="2024-08-06T00:16:40.705228441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 6 00:16:40.707087 containerd[1508]: time="2024-08-06T00:16:40.705471041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 6 00:16:40.707087 containerd[1508]: time="2024-08-06T00:16:40.705857381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Aug 6 00:16:40.707087 containerd[1508]: time="2024-08-06T00:16:40.705885718Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 6 00:16:40.707087 containerd[1508]: time="2024-08-06T00:16:40.705903566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 6 00:16:40.708161 containerd[1508]: time="2024-08-06T00:16:40.708124383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 6 00:16:40.708589 containerd[1508]: time="2024-08-06T00:16:40.708251166Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 6 00:16:40.708589 containerd[1508]: time="2024-08-06T00:16:40.708367233Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 6 00:16:40.708589 containerd[1508]: time="2024-08-06T00:16:40.708392831Z" level=info msg="metadata content store policy set" policy=shared Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.717769851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.717848253Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.717877016Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718016380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718050230Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718077458Z" level=info msg="NRI interface is disabled by configuration." Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718101535Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718304997Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718332990Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718354168Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718382855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718407753Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718434861Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Aug 6 00:16:40.718888 containerd[1508]: time="2024-08-06T00:16:40.718466393Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 6 00:16:40.719477 containerd[1508]: time="2024-08-06T00:16:40.718493993Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 6 00:16:40.719477 containerd[1508]: time="2024-08-06T00:16:40.718521395Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 6 00:16:40.719477 containerd[1508]: time="2024-08-06T00:16:40.718544699Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 6 00:16:40.719477 containerd[1508]: time="2024-08-06T00:16:40.718566743Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 6 00:16:40.719477 containerd[1508]: time="2024-08-06T00:16:40.718587606Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 6 00:16:40.719477 containerd[1508]: time="2024-08-06T00:16:40.718766792Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728347232Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728421735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728462927Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728515184Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728627104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728659984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728687824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728713147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728745997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728773575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728800625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728828797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.728856505Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 6 00:16:40.729984 containerd[1508]: time="2024-08-06T00:16:40.729160224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730530 containerd[1508]: time="2024-08-06T00:16:40.729198267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730530 containerd[1508]: time="2024-08-06T00:16:40.729224218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730530 containerd[1508]: time="2024-08-06T00:16:40.729250208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730530 containerd[1508]: time="2024-08-06T00:16:40.729276068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730530 containerd[1508]: time="2024-08-06T00:16:40.729302980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730530 containerd[1508]: time="2024-08-06T00:16:40.729327761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730530 containerd[1508]: time="2024-08-06T00:16:40.729350856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 6 00:16:40.730779 containerd[1508]: time="2024-08-06T00:16:40.729820542Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 6 00:16:40.730779 containerd[1508]: time="2024-08-06T00:16:40.729923519Z" level=info msg="Connect containerd service" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.735487505Z" level=info msg="using legacy CRI server" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.735661090Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.735875904Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.736835424Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.736910242Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.736958195Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.737001912Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.737024105Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.737573054Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.737658837Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.737774784Z" level=info msg="Start subscribing containerd event" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.737868143Z" level=info msg="Start recovering state" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.738012388Z" level=info msg="Start event monitor" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.738053131Z" level=info msg="Start snapshots syncer" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.738076761Z" level=info msg="Start cni network conf syncer for default" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.738091835Z" level=info msg="Start streaming server" Aug 6 00:16:40.740089 containerd[1508]: time="2024-08-06T00:16:40.739676311Z" level=info msg="containerd successfully booted in 0.156210s" Aug 6 00:16:40.738319 systemd[1]: Started containerd.service - containerd container runtime. Aug 6 00:16:41.211013 tar[1494]: linux-amd64/LICENSE Aug 6 00:16:41.211013 tar[1494]: linux-amd64/README.md Aug 6 00:16:41.238121 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 6 00:16:41.245212 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 6 00:16:41.253131 systemd-networkd[1432]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:6cf:24:19ff:fef4:1b3e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:6cf:24:19ff:fef4:1b3e/64 assigned by NDisc. Aug 6 00:16:41.253142 systemd-networkd[1432]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 6 00:16:41.276104 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 6 00:16:41.288524 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 6 00:16:41.295807 systemd[1]: Started sshd@0-10.244.27.62:22-139.178.89.65:37858.service - OpenSSH per-connection server daemon (139.178.89.65:37858). Aug 6 00:16:41.308990 systemd[1]: issuegen.service: Deactivated successfully. Aug 6 00:16:41.309251 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 6 00:16:41.319486 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 6 00:16:41.341301 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 6 00:16:41.348494 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 6 00:16:41.351435 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 6 00:16:41.353588 systemd[1]: Reached target getty.target - Login Prompts. Aug 6 00:16:41.676454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:16:41.692761 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 00:16:42.205452 sshd[1590]: Accepted publickey for core from 139.178.89.65 port 37858 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:16:42.208634 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:16:42.229192 systemd-logind[1485]: New session 1 of user core. Aug 6 00:16:42.229510 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 6 00:16:42.238716 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 6 00:16:42.267990 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Aug 6 00:16:42.279354 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 6 00:16:42.290561 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:16:42.429404 kubelet[1605]: E0806 00:16:42.429280 1605 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 00:16:42.431538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 00:16:42.431783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 00:16:42.432272 systemd[1]: kubelet.service: Consumed 1.054s CPU time. Aug 6 00:16:42.443295 systemd[1613]: Queued start job for default target default.target. Aug 6 00:16:42.449110 systemd[1613]: Created slice app.slice - User Application Slice. Aug 6 00:16:42.449367 systemd[1613]: Reached target paths.target - Paths. Aug 6 00:16:42.449509 systemd[1613]: Reached target timers.target - Timers. Aug 6 00:16:42.451880 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 6 00:16:42.475223 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 6 00:16:42.475449 systemd[1613]: Reached target sockets.target - Sockets. Aug 6 00:16:42.475485 systemd[1613]: Reached target basic.target - Basic System. Aug 6 00:16:42.475554 systemd[1613]: Reached target default.target - Main User Target. Aug 6 00:16:42.475637 systemd[1613]: Startup finished in 173ms. Aug 6 00:16:42.475730 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 6 00:16:42.493377 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 6 00:16:43.123467 systemd[1]: Started sshd@1-10.244.27.62:22-139.178.89.65:40628.service - OpenSSH per-connection server daemon (139.178.89.65:40628). Aug 6 00:16:43.991928 sshd[1628]: Accepted publickey for core from 139.178.89.65 port 40628 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:16:43.993907 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:16:44.000534 systemd-logind[1485]: New session 2 of user core. Aug 6 00:16:44.011377 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 6 00:16:44.602089 sshd[1628]: pam_unix(sshd:session): session closed for user core Aug 6 00:16:44.606420 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Aug 6 00:16:44.607830 systemd[1]: sshd@1-10.244.27.62:22-139.178.89.65:40628.service: Deactivated successfully. Aug 6 00:16:44.609827 systemd[1]: session-2.scope: Deactivated successfully. Aug 6 00:16:44.611435 systemd-logind[1485]: Removed session 2. Aug 6 00:16:44.754179 systemd[1]: Started sshd@2-10.244.27.62:22-139.178.89.65:40634.service - OpenSSH per-connection server daemon (139.178.89.65:40634). Aug 6 00:16:45.635798 sshd[1636]: Accepted publickey for core from 139.178.89.65 port 40634 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:16:45.637811 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:16:45.646637 systemd-logind[1485]: New session 3 of user core. Aug 6 00:16:45.653358 systemd[1]: Started session-3.scope - Session 3 of User core. 
Aug 6 00:16:46.250055 sshd[1636]: pam_unix(sshd:session): session closed for user core Aug 6 00:16:46.254585 systemd[1]: sshd@2-10.244.27.62:22-139.178.89.65:40634.service: Deactivated successfully. Aug 6 00:16:46.257453 systemd[1]: session-3.scope: Deactivated successfully. Aug 6 00:16:46.258729 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Aug 6 00:16:46.260162 systemd-logind[1485]: Removed session 3. Aug 6 00:16:46.418868 login[1598]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 6 00:16:46.425949 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 6 00:16:46.426215 systemd-logind[1485]: New session 4 of user core. Aug 6 00:16:46.436418 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 6 00:16:46.441477 systemd-logind[1485]: New session 5 of user core. Aug 6 00:16:46.443036 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 6 00:16:46.820292 coreos-metadata[1475]: Aug 06 00:16:46.820 WARN failed to locate config-drive, using the metadata service API instead Aug 6 00:16:46.845483 coreos-metadata[1475]: Aug 06 00:16:46.845 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Aug 6 00:16:46.853055 coreos-metadata[1475]: Aug 06 00:16:46.852 INFO Fetch failed with 404: resource not found Aug 6 00:16:46.853260 coreos-metadata[1475]: Aug 06 00:16:46.853 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 6 00:16:46.854038 coreos-metadata[1475]: Aug 06 00:16:46.853 INFO Fetch successful Aug 6 00:16:46.854292 coreos-metadata[1475]: Aug 06 00:16:46.854 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Aug 6 00:16:46.903703 coreos-metadata[1475]: Aug 06 00:16:46.903 INFO Fetch successful Aug 6 00:16:46.904045 coreos-metadata[1475]: Aug 06 00:16:46.903 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Aug 6 00:16:46.951381 coreos-metadata[1475]: Aug 06 00:16:46.951 INFO Fetch successful Aug 6 00:16:46.951649 coreos-metadata[1475]: Aug 06 00:16:46.951 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Aug 6 00:16:47.006082 coreos-metadata[1475]: Aug 06 00:16:47.005 INFO Fetch successful Aug 6 00:16:47.006502 coreos-metadata[1475]: Aug 06 00:16:47.006 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Aug 6 00:16:47.062951 coreos-metadata[1475]: Aug 06 00:16:47.062 INFO Fetch successful Aug 6 00:16:47.103370 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 6 00:16:47.104452 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Aug 6 00:16:47.569285 coreos-metadata[1554]: Aug 06 00:16:47.569 WARN failed to locate config-drive, using the metadata service API instead Aug 6 00:16:47.591750 coreos-metadata[1554]: Aug 06 00:16:47.591 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Aug 6 00:16:47.639470 coreos-metadata[1554]: Aug 06 00:16:47.639 INFO Fetch successful Aug 6 00:16:47.639788 coreos-metadata[1554]: Aug 06 00:16:47.639 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 6 00:16:47.704435 coreos-metadata[1554]: Aug 06 00:16:47.704 INFO Fetch successful Aug 6 00:16:47.706697 unknown[1554]: wrote ssh authorized keys file for user: core Aug 6 00:16:47.726765 update-ssh-keys[1670]: Updated "/home/core/.ssh/authorized_keys" Aug 6 00:16:47.727352 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 6 00:16:47.729596 systemd[1]: Finished sshkeys.service. Aug 6 00:16:47.732364 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 6 00:16:47.732573 systemd[1]: Startup finished in 1.568s (kernel) + 21.345s (initrd) + 12.113s (userspace) = 35.027s. Aug 6 00:16:52.490599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 6 00:16:52.501226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:16:52.725314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:16:52.734345 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 00:16:52.823940 kubelet[1681]: E0806 00:16:52.823629 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 00:16:52.828123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 00:16:52.828379 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 00:16:56.404298 systemd[1]: Started sshd@3-10.244.27.62:22-139.178.89.65:42156.service - OpenSSH per-connection server daemon (139.178.89.65:42156). Aug 6 00:16:57.277046 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 42156 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:16:57.279144 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:16:57.288591 systemd-logind[1485]: New session 6 of user core. Aug 6 00:16:57.299310 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 6 00:16:57.886362 sshd[1690]: pam_unix(sshd:session): session closed for user core Aug 6 00:16:57.890127 systemd[1]: sshd@3-10.244.27.62:22-139.178.89.65:42156.service: Deactivated successfully. Aug 6 00:16:57.892080 systemd[1]: session-6.scope: Deactivated successfully. Aug 6 00:16:57.893857 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Aug 6 00:16:57.895244 systemd-logind[1485]: Removed session 6. Aug 6 00:16:58.043623 systemd[1]: Started sshd@4-10.244.27.62:22-139.178.89.65:42164.service - OpenSSH per-connection server daemon (139.178.89.65:42164). 
Aug 6 00:16:58.905319 sshd[1697]: Accepted publickey for core from 139.178.89.65 port 42164 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:16:58.907552 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:16:58.914692 systemd-logind[1485]: New session 7 of user core. Aug 6 00:16:58.923293 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 6 00:16:59.509372 sshd[1697]: pam_unix(sshd:session): session closed for user core Aug 6 00:16:59.514892 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Aug 6 00:16:59.515605 systemd[1]: sshd@4-10.244.27.62:22-139.178.89.65:42164.service: Deactivated successfully. Aug 6 00:16:59.517932 systemd[1]: session-7.scope: Deactivated successfully. Aug 6 00:16:59.519389 systemd-logind[1485]: Removed session 7. Aug 6 00:16:59.666527 systemd[1]: Started sshd@5-10.244.27.62:22-139.178.89.65:42180.service - OpenSSH per-connection server daemon (139.178.89.65:42180). Aug 6 00:17:00.529642 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 42180 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:17:00.531736 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:17:00.539243 systemd-logind[1485]: New session 8 of user core. Aug 6 00:17:00.546311 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 6 00:17:01.138480 sshd[1704]: pam_unix(sshd:session): session closed for user core Aug 6 00:17:01.144548 systemd[1]: sshd@5-10.244.27.62:22-139.178.89.65:42180.service: Deactivated successfully. Aug 6 00:17:01.147215 systemd[1]: session-8.scope: Deactivated successfully. Aug 6 00:17:01.148355 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Aug 6 00:17:01.150139 systemd-logind[1485]: Removed session 8. Aug 6 00:17:01.294506 systemd[1]: Started sshd@6-10.244.27.62:22-139.178.89.65:38862.service - OpenSSH per-connection server daemon (139.178.89.65:38862). Aug 6 00:17:02.174345 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 38862 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:17:02.176316 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:17:02.182654 systemd-logind[1485]: New session 9 of user core. Aug 6 00:17:02.191187 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 6 00:17:02.663913 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 6 00:17:02.664524 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 00:17:02.684656 sudo[1714]: pam_unix(sudo:session): session closed for user root Aug 6 00:17:02.826925 sshd[1711]: pam_unix(sshd:session): session closed for user core Aug 6 00:17:02.832165 systemd[1]: sshd@6-10.244.27.62:22-139.178.89.65:38862.service: Deactivated successfully. Aug 6 00:17:02.835155 systemd[1]: session-9.scope: Deactivated successfully. Aug 6 00:17:02.836536 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Aug 6 00:17:02.837110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 6 00:17:02.845343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:17:02.847588 systemd-logind[1485]: Removed session 9. Aug 6 00:17:02.990492 systemd[1]: Started sshd@7-10.244.27.62:22-139.178.89.65:38878.service - OpenSSH per-connection server daemon (139.178.89.65:38878). 
Aug 6 00:17:03.093091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:03.099195 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 00:17:03.171425 kubelet[1729]: E0806 00:17:03.171331 1729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 00:17:03.174619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 00:17:03.175110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 6 00:17:03.855984 sshd[1722]: Accepted publickey for core from 139.178.89.65 port 38878 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:17:03.858551 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:17:03.866483 systemd-logind[1485]: New session 10 of user core. Aug 6 00:17:03.873206 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 6 00:17:04.325275 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 6 00:17:04.325758 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 00:17:04.331795 sudo[1738]: pam_unix(sudo:session): session closed for user root Aug 6 00:17:04.340345 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 6 00:17:04.340799 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 00:17:04.371655 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 6 00:17:04.375071 auditctl[1741]: No rules Aug 6 00:17:04.375610 systemd[1]: audit-rules.service: Deactivated successfully. Aug 6 00:17:04.375926 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 6 00:17:04.383856 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 6 00:17:04.448765 augenrules[1759]: No rules Aug 6 00:17:04.450576 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 6 00:17:04.452993 sudo[1737]: pam_unix(sudo:session): session closed for user root Aug 6 00:17:04.594089 sshd[1722]: pam_unix(sshd:session): session closed for user core Aug 6 00:17:04.600223 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Aug 6 00:17:04.600657 systemd[1]: sshd@7-10.244.27.62:22-139.178.89.65:38878.service: Deactivated successfully. Aug 6 00:17:04.602696 systemd[1]: session-10.scope: Deactivated successfully. Aug 6 00:17:04.604168 systemd-logind[1485]: Removed session 10. Aug 6 00:17:04.751395 systemd[1]: Started sshd@8-10.244.27.62:22-139.178.89.65:38892.service - OpenSSH per-connection server daemon (139.178.89.65:38892). Aug 6 00:17:05.624777 sshd[1767]: Accepted publickey for core from 139.178.89.65 port 38892 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:17:05.626590 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:17:05.632717 systemd-logind[1485]: New session 11 of user core. Aug 6 00:17:05.642268 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 6 00:17:06.092551 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 6 00:17:06.093538 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 6 00:17:06.266722 (dockerd)[1780]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 6 00:17:06.268344 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 6 00:17:06.683141 dockerd[1780]: time="2024-08-06T00:17:06.682676425Z" level=info msg="Starting up" Aug 6 00:17:06.754869 dockerd[1780]: time="2024-08-06T00:17:06.753687329Z" level=info msg="Loading containers: start." Aug 6 00:17:06.925223 kernel: Initializing XFRM netlink socket Aug 6 00:17:07.038236 systemd-networkd[1432]: docker0: Link UP Aug 6 00:17:07.068522 dockerd[1780]: time="2024-08-06T00:17:07.067650716Z" level=info msg="Loading containers: done." Aug 6 00:17:07.174169 dockerd[1780]: time="2024-08-06T00:17:07.173925221Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 6 00:17:07.174902 dockerd[1780]: time="2024-08-06T00:17:07.174852607Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 6 00:17:07.175132 dockerd[1780]: time="2024-08-06T00:17:07.175093266Z" level=info msg="Daemon has completed initialization" Aug 6 00:17:07.218043 dockerd[1780]: time="2024-08-06T00:17:07.215576182Z" level=info msg="API listen on /run/docker.sock" Aug 6 00:17:07.217186 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 6 00:17:08.601228 containerd[1508]: time="2024-08-06T00:17:08.601093224Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 6 00:17:09.472087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767456247.mount: Deactivated successfully. Aug 6 00:17:11.305611 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 6 00:17:12.233187 containerd[1508]: time="2024-08-06T00:17:12.233047039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:12.235800 containerd[1508]: time="2024-08-06T00:17:12.235494211Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527325" Aug 6 00:17:12.236799 containerd[1508]: time="2024-08-06T00:17:12.236704053Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:12.240868 containerd[1508]: time="2024-08-06T00:17:12.240795389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:12.242930 containerd[1508]: time="2024-08-06T00:17:12.242533442Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 3.641321581s" Aug 6 00:17:12.242930 containerd[1508]: time="2024-08-06T00:17:12.242611771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\"" Aug 6 00:17:12.272163 containerd[1508]: time="2024-08-06T00:17:12.272109532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 6 00:17:13.241185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 6 00:17:13.250448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:17:13.459342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:13.463550 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 00:17:13.556091 kubelet[1987]: E0806 00:17:13.555840 1987 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 00:17:13.561256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 00:17:13.561532 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 6 00:17:16.006055 containerd[1508]: time="2024-08-06T00:17:16.005751756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:16.008043 containerd[1508]: time="2024-08-06T00:17:16.007989779Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847075" Aug 6 00:17:16.009691 containerd[1508]: time="2024-08-06T00:17:16.009610078Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:16.013988 containerd[1508]: time="2024-08-06T00:17:16.013635285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:16.016498 containerd[1508]: time="2024-08-06T00:17:16.015343463Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 3.743177294s" Aug 6 00:17:16.016498 containerd[1508]: time="2024-08-06T00:17:16.015394999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\"" Aug 6 00:17:16.052848 containerd[1508]: time="2024-08-06T00:17:16.052782986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 6 00:17:17.692805 containerd[1508]: time="2024-08-06T00:17:17.692605269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:17.695186 containerd[1508]: time="2024-08-06T00:17:17.695085490Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097303" Aug 6 00:17:17.696188 containerd[1508]: time="2024-08-06T00:17:17.696139901Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:17.705325 containerd[1508]: time="2024-08-06T00:17:17.705153364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:17.708381 containerd[1508]: time="2024-08-06T00:17:17.708334371Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.65549031s" Aug 6 00:17:17.708548 containerd[1508]: time="2024-08-06T00:17:17.708391191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\"" Aug 6 00:17:17.737532 containerd[1508]: 
time="2024-08-06T00:17:17.737110977Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 6 00:17:19.208644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222443520.mount: Deactivated successfully. Aug 6 00:17:19.854902 containerd[1508]: time="2024-08-06T00:17:19.854807882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:19.856266 containerd[1508]: time="2024-08-06T00:17:19.856135226Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303777" Aug 6 00:17:19.857237 containerd[1508]: time="2024-08-06T00:17:19.857192103Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:19.861282 containerd[1508]: time="2024-08-06T00:17:19.861240760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:19.862295 containerd[1508]: time="2024-08-06T00:17:19.862248409Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 2.125080377s" Aug 6 00:17:19.862386 containerd[1508]: time="2024-08-06T00:17:19.862325146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\"" Aug 6 00:17:19.905123 containerd[1508]: time="2024-08-06T00:17:19.904332864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 6 00:17:20.473378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919862309.mount: Deactivated successfully. 
Aug 6 00:17:20.479100 containerd[1508]: time="2024-08-06T00:17:20.479020368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:20.480508 containerd[1508]: time="2024-08-06T00:17:20.480394883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Aug 6 00:17:20.481625 containerd[1508]: time="2024-08-06T00:17:20.481548033Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:20.485750 containerd[1508]: time="2024-08-06T00:17:20.485252935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:20.486608 containerd[1508]: time="2024-08-06T00:17:20.486555530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 582.153929ms" Aug 6 00:17:20.486731 containerd[1508]: time="2024-08-06T00:17:20.486614091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 6 00:17:20.520923 containerd[1508]: time="2024-08-06T00:17:20.520854904Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 6 00:17:21.173267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459951279.mount: Deactivated successfully. Aug 6 00:17:23.744083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 6 00:17:23.756947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:17:23.977251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:23.991829 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 6 00:17:24.130808 kubelet[2084]: E0806 00:17:24.130105 2084 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 6 00:17:24.134752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 6 00:17:24.135459 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 6 00:17:25.193502 containerd[1508]: time="2024-08-06T00:17:25.193338356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:25.195631 containerd[1508]: time="2024-08-06T00:17:25.195508617Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Aug 6 00:17:25.197874 containerd[1508]: time="2024-08-06T00:17:25.197793006Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:25.203887 containerd[1508]: time="2024-08-06T00:17:25.203817372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:25.206471 containerd[1508]: time="2024-08-06T00:17:25.205885555Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.684957267s" Aug 6 00:17:25.206471 containerd[1508]: time="2024-08-06T00:17:25.206026775Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Aug 6 00:17:25.240700 containerd[1508]: time="2024-08-06T00:17:25.240208887Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 6 00:17:25.716421 update_engine[1486]: I0806 00:17:25.716287 1486 update_attempter.cc:509] Updating boot flags... Aug 6 00:17:25.767000 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2107) Aug 6 00:17:25.866005 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2108) Aug 6 00:17:25.921387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189436253.mount: Deactivated successfully. 
Aug 6 00:17:28.226121 containerd[1508]: time="2024-08-06T00:17:28.226020258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:28.227893 containerd[1508]: time="2024-08-06T00:17:28.227546844Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Aug 6 00:17:28.229180 containerd[1508]: time="2024-08-06T00:17:28.229025038Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:28.232070 containerd[1508]: time="2024-08-06T00:17:28.232035127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:17:28.233431 containerd[1508]: time="2024-08-06T00:17:28.233257912Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 2.992989041s" Aug 6 00:17:28.233431 containerd[1508]: time="2024-08-06T00:17:28.233308307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Aug 6 00:17:32.683162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:32.694395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:17:32.725005 systemd[1]: Reloading requested from client PID 2182 ('systemctl') (unit session-11.scope)... Aug 6 00:17:32.725270 systemd[1]: Reloading... Aug 6 00:17:32.913008 zram_generator::config[2220]: No configuration found. Aug 6 00:17:33.072082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 00:17:33.180382 systemd[1]: Reloading finished in 454 ms. Aug 6 00:17:33.252732 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 00:17:33.254480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:17:33.255304 systemd[1]: kubelet.service: Deactivated successfully. Aug 6 00:17:33.255669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:33.259339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:17:33.407360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:33.422441 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 00:17:33.508666 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 00:17:33.510044 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 6 00:17:33.510044 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 00:17:33.510044 kubelet[2289]: I0806 00:17:33.509380 2289 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 6 00:17:34.198998 kubelet[2289]: I0806 00:17:34.198846 2289 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 6 00:17:34.198998 kubelet[2289]: I0806 00:17:34.198908 2289 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 6 00:17:34.200698 kubelet[2289]: I0806 00:17:34.199358 2289 server.go:895] "Client rotation is on, will bootstrap in background" Aug 6 00:17:34.223593 kubelet[2289]: I0806 00:17:34.223550 2289 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 00:17:34.230785 kubelet[2289]: E0806 00:17:34.230742 2289 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.27.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.244318 kubelet[2289]: I0806 00:17:34.244287 2289 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 6 00:17:34.245833 kubelet[2289]: I0806 00:17:34.245784 2289 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 6 00:17:34.246148 kubelet[2289]: I0806 00:17:34.246100 2289 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 6 00:17:34.246374 kubelet[2289]: I0806 00:17:34.246165 2289 topology_manager.go:138] "Creating topology manager with none policy" Aug 6 00:17:34.246374 kubelet[2289]: I0806 00:17:34.246183 2289 
container_manager_linux.go:301] "Creating device plugin manager" Aug 6 00:17:34.247172 kubelet[2289]: I0806 00:17:34.247125 2289 state_mem.go:36] "Initialized new in-memory state store" Aug 6 00:17:34.248763 kubelet[2289]: I0806 00:17:34.248728 2289 kubelet.go:393] "Attempting to sync node with API server" Aug 6 00:17:34.248868 kubelet[2289]: I0806 00:17:34.248771 2289 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 6 00:17:34.248868 kubelet[2289]: I0806 00:17:34.248838 2289 kubelet.go:309] "Adding apiserver pod source" Aug 6 00:17:34.250521 kubelet[2289]: I0806 00:17:34.248881 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 6 00:17:34.253699 kubelet[2289]: W0806 00:17:34.252650 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.244.27.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.253699 kubelet[2289]: E0806 00:17:34.252723 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.27.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.253699 kubelet[2289]: W0806 00:17:34.253137 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.244.27.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iww3y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.253699 kubelet[2289]: E0806 00:17:34.253184 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.27.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iww3y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.253699 kubelet[2289]: I0806 00:17:34.253307 2289 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 6 00:17:34.256705 kubelet[2289]: W0806 00:17:34.256669 2289 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 6 00:17:34.258183 kubelet[2289]: I0806 00:17:34.258159 2289 server.go:1232] "Started kubelet" Aug 6 00:17:34.262533 kubelet[2289]: I0806 00:17:34.262507 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 6 00:17:34.265261 kubelet[2289]: I0806 00:17:34.265237 2289 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 6 00:17:34.269532 kubelet[2289]: E0806 00:17:34.263029 2289 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"srv-iww3y.gb1.brightbox.com.17e8fb9035595256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"srv-iww3y.gb1.brightbox.com", UID:"srv-iww3y.gb1.brightbox.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"srv-iww3y.gb1.brightbox.com"}, FirstTimestamp:time.Date(2024, time.August, 6, 0, 17, 34, 258123350, time.Local), LastTimestamp:time.Date(2024, time.August, 6, 0, 17, 34, 258123350, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"srv-iww3y.gb1.brightbox.com"}': 'Post "https://10.244.27.62:6443/api/v1/namespaces/default/events": dial tcp 10.244.27.62:6443: connect: connection refused'(may retry after sleeping) Aug 6 00:17:34.269739 kubelet[2289]: I0806 00:17:34.263663 2289 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 6 00:17:34.271141 kubelet[2289]: I0806 00:17:34.271116 2289 server.go:462] "Adding debug handlers to kubelet server" Aug 6 00:17:34.274843 kubelet[2289]: I0806 00:17:34.274817 2289 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 6 00:17:34.275123 kubelet[2289]: I0806 00:17:34.275100 2289 reconciler_new.go:29] "Reconciler: start to sync state" Aug 6 00:17:34.275261 kubelet[2289]: I0806 00:17:34.263716 2289 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 6 00:17:34.275660 kubelet[2289]: I0806 00:17:34.275637 2289 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 6 00:17:34.276272 kubelet[2289]: E0806 00:17:34.263847 2289 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 6 00:17:34.276634 kubelet[2289]: E0806 00:17:34.276613 2289 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 6 00:17:34.280603 kubelet[2289]: E0806 00:17:34.280571 2289 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iww3y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.62:6443: connect: connection refused" interval="200ms" Aug 6 00:17:34.284890 kubelet[2289]: W0806 00:17:34.284830 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.244.27.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.285076 kubelet[2289]: E0806 00:17:34.284899 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.27.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.298325 kubelet[2289]: I0806 00:17:34.297814 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 6 00:17:34.300851 kubelet[2289]: I0806 00:17:34.299786 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 6 00:17:34.300851 kubelet[2289]: I0806 00:17:34.299832 2289 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 6 00:17:34.300851 kubelet[2289]: I0806 00:17:34.299870 2289 kubelet.go:2303] "Starting kubelet main sync loop" Aug 6 00:17:34.300851 kubelet[2289]: E0806 00:17:34.299981 2289 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 6 00:17:34.315521 kubelet[2289]: W0806 00:17:34.314998 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.244.27.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.315521 kubelet[2289]: E0806 00:17:34.315526 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.27.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:34.332778 kubelet[2289]: I0806 00:17:34.332736 2289 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 6 00:17:34.333009 kubelet[2289]: I0806 00:17:34.332988 2289 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 6 00:17:34.333151 kubelet[2289]: I0806 00:17:34.333132 2289 state_mem.go:36] "Initialized new in-memory state store" Aug 6 00:17:34.335210 kubelet[2289]: I0806 00:17:34.335183 2289 policy_none.go:49] "None policy: Start" Aug 6 00:17:34.336271 kubelet[2289]: I0806 00:17:34.336181 2289 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 6 00:17:34.336434 kubelet[2289]: I0806 00:17:34.336413 2289 state_mem.go:35] "Initializing new in-memory state store" Aug 6 00:17:34.353385 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Aug 6 00:17:34.377612 kubelet[2289]: I0806 00:17:34.377569 2289 kubelet_node_status.go:70] "Attempting to register node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.378412 kubelet[2289]: E0806 00:17:34.378364 2289 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.27.62:6443/api/v1/nodes\": dial tcp 10.244.27.62:6443: connect: connection refused" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.382353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 6 00:17:34.387233 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 6 00:17:34.388940 kubelet[2289]: W0806 00:17:34.388824 2289 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Aug 6 00:17:34.401083 kubelet[2289]: E0806 00:17:34.401004 2289 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 6 00:17:34.401447 kubelet[2289]: I0806 00:17:34.401296 2289 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 6 00:17:34.401812 kubelet[2289]: I0806 00:17:34.401779 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 6 00:17:34.403786 kubelet[2289]: E0806 00:17:34.403652 2289 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-iww3y.gb1.brightbox.com\" not found" Aug 6 00:17:34.481780 kubelet[2289]: E0806 00:17:34.481609 2289 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iww3y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.62:6443: connect: connection refused" interval="400ms" Aug 6 00:17:34.582501 kubelet[2289]: I0806 00:17:34.582021 2289 kubelet_node_status.go:70] "Attempting to register node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.582501 kubelet[2289]: E0806 00:17:34.582449 2289 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.27.62:6443/api/v1/nodes\": dial tcp 10.244.27.62:6443: connect: connection refused" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.601836 kubelet[2289]: I0806 00:17:34.601760 2289 topology_manager.go:215] "Topology Admit Handler" podUID="fafa61b99d536933826221b5c8b5ccd4" podNamespace="kube-system" podName="kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.604941 kubelet[2289]: I0806 00:17:34.604583 2289 topology_manager.go:215] "Topology Admit Handler" podUID="412b2d9684f938d69c723108684c1528" podNamespace="kube-system" podName="kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.606907 kubelet[2289]: I0806 00:17:34.606870 2289 topology_manager.go:215] "Topology Admit Handler" podUID="7b743f811f91b6fa67889d5f0ceed075" podNamespace="kube-system" podName="kube-scheduler-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.617676 systemd[1]: Created slice kubepods-burstable-podfafa61b99d536933826221b5c8b5ccd4.slice - libcontainer container kubepods-burstable-podfafa61b99d536933826221b5c8b5ccd4.slice. 
Aug 6 00:17:34.639237 systemd[1]: Created slice kubepods-burstable-pod412b2d9684f938d69c723108684c1528.slice - libcontainer container kubepods-burstable-pod412b2d9684f938d69c723108684c1528.slice. Aug 6 00:17:34.655676 systemd[1]: Created slice kubepods-burstable-pod7b743f811f91b6fa67889d5f0ceed075.slice - libcontainer container kubepods-burstable-pod7b743f811f91b6fa67889d5f0ceed075.slice. Aug 6 00:17:34.677732 kubelet[2289]: I0806 00:17:34.677593 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-ca-certs\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.677732 kubelet[2289]: I0806 00:17:34.677653 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-flexvolume-dir\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.677732 kubelet[2289]: I0806 00:17:34.677689 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-kubeconfig\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.677732 kubelet[2289]: I0806 00:17:34.677736 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fafa61b99d536933826221b5c8b5ccd4-ca-certs\") pod \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" (UID: \"fafa61b99d536933826221b5c8b5ccd4\") " pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.678131 kubelet[2289]: I0806 00:17:34.677771 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fafa61b99d536933826221b5c8b5ccd4-k8s-certs\") pod \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" (UID: \"fafa61b99d536933826221b5c8b5ccd4\") " pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.678131 kubelet[2289]: I0806 00:17:34.677804 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fafa61b99d536933826221b5c8b5ccd4-usr-share-ca-certificates\") pod \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" (UID: \"fafa61b99d536933826221b5c8b5ccd4\") " pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.678131 kubelet[2289]: I0806 00:17:34.677835 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-k8s-certs\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.678131 kubelet[2289]: I0806 00:17:34.677867 2289 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.678131 kubelet[2289]: I0806 00:17:34.677899 2289 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b743f811f91b6fa67889d5f0ceed075-kubeconfig\") pod \"kube-scheduler-srv-iww3y.gb1.brightbox.com\" (UID: \"7b743f811f91b6fa67889d5f0ceed075\") " pod="kube-system/kube-scheduler-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.883377 kubelet[2289]: E0806 00:17:34.883319 2289 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iww3y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.62:6443: connect: connection refused" interval="800ms" Aug 6 00:17:34.938128 containerd[1508]: time="2024-08-06T00:17:34.938010966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-iww3y.gb1.brightbox.com,Uid:fafa61b99d536933826221b5c8b5ccd4,Namespace:kube-system,Attempt:0,}" Aug 6 00:17:34.948458 containerd[1508]: time="2024-08-06T00:17:34.948405137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-iww3y.gb1.brightbox.com,Uid:412b2d9684f938d69c723108684c1528,Namespace:kube-system,Attempt:0,}" Aug 6 00:17:34.959215 containerd[1508]: time="2024-08-06T00:17:34.959144132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-iww3y.gb1.brightbox.com,Uid:7b743f811f91b6fa67889d5f0ceed075,Namespace:kube-system,Attempt:0,}" Aug 6 00:17:34.986416 kubelet[2289]: I0806 00:17:34.986373 2289 kubelet_node_status.go:70] "Attempting to register node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:34.986910 kubelet[2289]: E0806 00:17:34.986840 2289 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.27.62:6443/api/v1/nodes\": dial tcp 10.244.27.62:6443: connect: connection refused" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:35.238798 kubelet[2289]: W0806 00:17:35.238443 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.244.27.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iww3y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.238798 kubelet[2289]: E0806 00:17:35.238589 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.27.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-iww3y.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.238798 kubelet[2289]: W0806 00:17:35.238509 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.244.27.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.238798 kubelet[2289]: E0806 00:17:35.238640 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.244.27.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.542018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320979788.mount: Deactivated successfully. Aug 6 00:17:35.547931 kubelet[2289]: W0806 00:17:35.547698 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.244.27.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.547931 kubelet[2289]: E0806 00:17:35.547769 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.27.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.550453 containerd[1508]: time="2024-08-06T00:17:35.549077651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 00:17:35.553107 containerd[1508]: time="2024-08-06T00:17:35.553022430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 6 00:17:35.553860 containerd[1508]: time="2024-08-06T00:17:35.553812428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 00:17:35.555004 containerd[1508]: time="2024-08-06T00:17:35.554952921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 00:17:35.555989 containerd[1508]: time="2024-08-06T00:17:35.555698014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 6 00:17:35.557264 containerd[1508]: time="2024-08-06T00:17:35.557145624Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 00:17:35.558131 containerd[1508]: time="2024-08-06T00:17:35.558087970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 6 00:17:35.561785 containerd[1508]: time="2024-08-06T00:17:35.561706403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 6 00:17:35.566465 containerd[1508]: time="2024-08-06T00:17:35.565567077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.290757ms" Aug 6 00:17:35.615540 containerd[1508]: time="2024-08-06T00:17:35.614693118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 666.000283ms" Aug 6 00:17:35.616131 containerd[1508]: time="2024-08-06T00:17:35.616083553Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.839802ms" Aug 6 00:17:35.689406 kubelet[2289]: E0806 00:17:35.689352 2289 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.27.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-iww3y.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.27.62:6443: connect: connection refused" interval="1.6s" Aug 6 00:17:35.745820 kubelet[2289]: W0806 00:17:35.745685 2289 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.244.27.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.745820 kubelet[2289]: E0806 00:17:35.745774 2289 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.27.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:35.791578 kubelet[2289]: I0806 00:17:35.791409 2289 kubelet_node_status.go:70] "Attempting to register node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:35.792132 kubelet[2289]: E0806 00:17:35.791897 2289 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.244.27.62:6443/api/v1/nodes\": dial tcp 10.244.27.62:6443: connect: connection refused" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:35.876925 containerd[1508]: time="2024-08-06T00:17:35.876257460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:17:35.876925 containerd[1508]: time="2024-08-06T00:17:35.876369270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:35.876925 containerd[1508]: time="2024-08-06T00:17:35.876422474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:17:35.876925 containerd[1508]: time="2024-08-06T00:17:35.876452758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:35.886531 containerd[1508]: time="2024-08-06T00:17:35.886236028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:17:35.886531 containerd[1508]: time="2024-08-06T00:17:35.886332135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:35.886531 containerd[1508]: time="2024-08-06T00:17:35.886362258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:17:35.886531 containerd[1508]: time="2024-08-06T00:17:35.886379125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:35.892616 containerd[1508]: time="2024-08-06T00:17:35.890216368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:17:35.892616 containerd[1508]: time="2024-08-06T00:17:35.890295227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:35.892616 containerd[1508]: time="2024-08-06T00:17:35.890323841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:17:35.892616 containerd[1508]: time="2024-08-06T00:17:35.890344841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:35.931221 systemd[1]: Started cri-containerd-09340d2054abf4bde889ccb630159ee7b0e2070dc7300965f02929b289e96f4d.scope - libcontainer container 09340d2054abf4bde889ccb630159ee7b0e2070dc7300965f02929b289e96f4d. Aug 6 00:17:35.945322 systemd[1]: Started cri-containerd-c370c90b9b0579aa9e5c76fc11f92972dbe4c613eeddaab635829ae5891d2e19.scope - libcontainer container c370c90b9b0579aa9e5c76fc11f92972dbe4c613eeddaab635829ae5891d2e19. Aug 6 00:17:35.962265 systemd[1]: Started cri-containerd-5d6fbd5536914638d17ba9f1d6fcdac32cd3179a8aa18f63abcf558bf26a1642.scope - libcontainer container 5d6fbd5536914638d17ba9f1d6fcdac32cd3179a8aa18f63abcf558bf26a1642. Aug 6 00:17:36.065376 containerd[1508]: time="2024-08-06T00:17:36.064545986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-iww3y.gb1.brightbox.com,Uid:7b743f811f91b6fa67889d5f0ceed075,Namespace:kube-system,Attempt:0,} returns sandbox id \"09340d2054abf4bde889ccb630159ee7b0e2070dc7300965f02929b289e96f4d\"" Aug 6 00:17:36.074513 containerd[1508]: time="2024-08-06T00:17:36.074371114Z" level=info msg="CreateContainer within sandbox \"09340d2054abf4bde889ccb630159ee7b0e2070dc7300965f02929b289e96f4d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 6 00:17:36.075382 containerd[1508]: time="2024-08-06T00:17:36.075329745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-iww3y.gb1.brightbox.com,Uid:412b2d9684f938d69c723108684c1528,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d6fbd5536914638d17ba9f1d6fcdac32cd3179a8aa18f63abcf558bf26a1642\"" Aug 6 00:17:36.079327 containerd[1508]: time="2024-08-06T00:17:36.079287675Z" level=info msg="CreateContainer within sandbox \"5d6fbd5536914638d17ba9f1d6fcdac32cd3179a8aa18f63abcf558bf26a1642\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 6 00:17:36.083572 containerd[1508]: time="2024-08-06T00:17:36.083418593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-iww3y.gb1.brightbox.com,Uid:fafa61b99d536933826221b5c8b5ccd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c370c90b9b0579aa9e5c76fc11f92972dbe4c613eeddaab635829ae5891d2e19\"" Aug 6 00:17:36.091923 containerd[1508]: time="2024-08-06T00:17:36.091700505Z" level=info msg="CreateContainer within sandbox \"c370c90b9b0579aa9e5c76fc11f92972dbe4c613eeddaab635829ae5891d2e19\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 6 00:17:36.112305 containerd[1508]: time="2024-08-06T00:17:36.112228919Z" level=info msg="CreateContainer within sandbox \"5d6fbd5536914638d17ba9f1d6fcdac32cd3179a8aa18f63abcf558bf26a1642\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b82aaaadae9e76623dc00ac556df14fbb204ca57c1bd1ac23d9f21292b93d3b7\"" Aug 6 00:17:36.113257 containerd[1508]: time="2024-08-06T00:17:36.113110341Z" level=info msg="CreateContainer within sandbox \"09340d2054abf4bde889ccb630159ee7b0e2070dc7300965f02929b289e96f4d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca7af6b4ec2fdb456eaac0bbd61fecb7a64bb9bec13f10e230844dd2f1770cf3\"" Aug 6 00:17:36.114120 containerd[1508]: time="2024-08-06T00:17:36.114082454Z" level=info msg="StartContainer for \"ca7af6b4ec2fdb456eaac0bbd61fecb7a64bb9bec13f10e230844dd2f1770cf3\"" Aug 6 00:17:36.117398 containerd[1508]: time="2024-08-06T00:17:36.116086776Z" level=info msg="StartContainer for \"b82aaaadae9e76623dc00ac556df14fbb204ca57c1bd1ac23d9f21292b93d3b7\"" Aug 6 00:17:36.121183 containerd[1508]: time="2024-08-06T00:17:36.121127430Z" level=info msg="CreateContainer within sandbox \"c370c90b9b0579aa9e5c76fc11f92972dbe4c613eeddaab635829ae5891d2e19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"df750c682bf0227dbe95b8605df4a36b9410dad5d86296f71b2454a57489909c\"" Aug 6 00:17:36.122150 containerd[1508]: time="2024-08-06T00:17:36.122111284Z" level=info msg="StartContainer for \"df750c682bf0227dbe95b8605df4a36b9410dad5d86296f71b2454a57489909c\"" Aug 6 00:17:36.173292 systemd[1]: Started cri-containerd-ca7af6b4ec2fdb456eaac0bbd61fecb7a64bb9bec13f10e230844dd2f1770cf3.scope - libcontainer container ca7af6b4ec2fdb456eaac0bbd61fecb7a64bb9bec13f10e230844dd2f1770cf3. Aug 6 00:17:36.184904 systemd[1]: Started cri-containerd-b82aaaadae9e76623dc00ac556df14fbb204ca57c1bd1ac23d9f21292b93d3b7.scope - libcontainer container b82aaaadae9e76623dc00ac556df14fbb204ca57c1bd1ac23d9f21292b93d3b7. Aug 6 00:17:36.202197 systemd[1]: Started cri-containerd-df750c682bf0227dbe95b8605df4a36b9410dad5d86296f71b2454a57489909c.scope - libcontainer container df750c682bf0227dbe95b8605df4a36b9410dad5d86296f71b2454a57489909c. 
Aug 6 00:17:36.302059 containerd[1508]: time="2024-08-06T00:17:36.301884146Z" level=info msg="StartContainer for \"b82aaaadae9e76623dc00ac556df14fbb204ca57c1bd1ac23d9f21292b93d3b7\" returns successfully" Aug 6 00:17:36.309102 kubelet[2289]: E0806 00:17:36.309049 2289 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.27.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.27.62:6443: connect: connection refused Aug 6 00:17:36.310314 containerd[1508]: time="2024-08-06T00:17:36.310267129Z" level=info msg="StartContainer for \"ca7af6b4ec2fdb456eaac0bbd61fecb7a64bb9bec13f10e230844dd2f1770cf3\" returns successfully" Aug 6 00:17:36.361061 containerd[1508]: time="2024-08-06T00:17:36.360883655Z" level=info msg="StartContainer for \"df750c682bf0227dbe95b8605df4a36b9410dad5d86296f71b2454a57489909c\" returns successfully" Aug 6 00:17:37.394882 kubelet[2289]: I0806 00:17:37.394842 2289 kubelet_node_status.go:70] "Attempting to register node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:39.049032 kubelet[2289]: E0806 00:17:39.048942 2289 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-iww3y.gb1.brightbox.com\" not found" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:39.091118 kubelet[2289]: I0806 00:17:39.090643 2289 kubelet_node_status.go:73] "Successfully registered node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:39.185365 kubelet[2289]: E0806 00:17:39.185299 2289 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:39.253906 kubelet[2289]: I0806 00:17:39.253844 2289 apiserver.go:52] "Watching apiserver" Aug 6 00:17:39.276271 kubelet[2289]: I0806 00:17:39.276194 2289 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 6 00:17:39.363050 kubelet[2289]: E0806 00:17:39.362502 2289 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:40.369543 kubelet[2289]: W0806 00:17:40.368835 2289 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 00:17:42.109569 systemd[1]: Reloading requested from client PID 2567 ('systemctl') (unit session-11.scope)... Aug 6 00:17:42.109610 systemd[1]: Reloading... Aug 6 00:17:42.243996 zram_generator::config[2604]: No configuration found. Aug 6 00:17:42.435734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 6 00:17:42.562832 systemd[1]: Reloading finished in 452 ms. Aug 6 00:17:42.628549 kubelet[2289]: I0806 00:17:42.628493 2289 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 00:17:42.629430 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 6 00:17:42.639682 systemd[1]: kubelet.service: Deactivated successfully. Aug 6 00:17:42.640192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:42.640316 systemd[1]: kubelet.service: Consumed 1.414s CPU time, 111.6M memory peak, 0B memory swap peak. Aug 6 00:17:42.646330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 6 00:17:42.846512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 6 00:17:42.860490 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 6 00:17:42.989257 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 00:17:42.990338 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 6 00:17:42.990338 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 6 00:17:42.990338 kubelet[2668]: I0806 00:17:42.989876 2668 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 6 00:17:43.007640 kubelet[2668]: I0806 00:17:43.007598 2668 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 6 00:17:43.008983 kubelet[2668]: I0806 00:17:43.007881 2668 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 6 00:17:43.008983 kubelet[2668]: I0806 00:17:43.008184 2668 server.go:895] "Client rotation is on, will bootstrap in background" Aug 6 00:17:43.010726 kubelet[2668]: I0806 00:17:43.010666 2668 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 6 00:17:43.015312 kubelet[2668]: I0806 00:17:43.014309 2668 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 6 00:17:43.039604 kubelet[2668]: I0806 00:17:43.038695 2668 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 6 00:17:43.039984 kubelet[2668]: I0806 00:17:43.039933 2668 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 6 00:17:43.040949 kubelet[2668]: I0806 00:17:43.040649 2668 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 6 00:17:43.040949 kubelet[2668]: I0806 00:17:43.040941 2668 topology_manager.go:138] "Creating topology manager with none policy" Aug 6 00:17:43.041272 kubelet[2668]: I0806 00:17:43.040995 2668 container_manager_linux.go:301] "Creating device plugin manager" Aug 6 00:17:43.041272 kubelet[2668]: I0806 00:17:43.041222 2668 state_mem.go:36] "Initialized new in-memory state store" Aug 6 00:17:43.044737 kubelet[2668]: I0806 00:17:43.043914 2668 kubelet.go:393] "Attempting to sync node with API server" Aug 6 00:17:43.044737 kubelet[2668]: I0806 00:17:43.043956 2668 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 6 00:17:43.044737 kubelet[2668]: I0806 00:17:43.044385 2668 kubelet.go:309] "Adding apiserver pod source" Aug 6 00:17:43.047521 kubelet[2668]: I0806 00:17:43.044949 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 6 00:17:43.050599 kubelet[2668]: I0806 00:17:43.050565 2668 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 6 00:17:43.052824 kubelet[2668]: I0806 00:17:43.052797 2668 server.go:1232] "Started kubelet" Aug 6 00:17:43.057975 kubelet[2668]: I0806 00:17:43.057939 2668 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 6 00:17:43.060144 kubelet[2668]: I0806 00:17:43.060111 2668 server.go:462] "Adding debug handlers to kubelet server" Aug 6 00:17:43.062794 kubelet[2668]: I0806 00:17:43.062767 2668 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 6 00:17:43.065725 kubelet[2668]: I0806 00:17:43.064259 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 6 00:17:43.066648 kubelet[2668]: E0806 00:17:43.066625 2668 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable 
to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 6 00:17:43.066806 kubelet[2668]: E0806 00:17:43.066787 2668 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 6 00:17:43.070131 kubelet[2668]: I0806 00:17:43.069999 2668 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 6 00:17:43.082048 kubelet[2668]: I0806 00:17:43.081697 2668 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 6 00:17:43.082716 kubelet[2668]: I0806 00:17:43.082690 2668 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 6 00:17:43.083126 kubelet[2668]: I0806 00:17:43.083104 2668 reconciler_new.go:29] "Reconciler: start to sync state" Aug 6 00:17:43.119196 kubelet[2668]: I0806 00:17:43.119078 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 6 00:17:43.121905 kubelet[2668]: I0806 00:17:43.121427 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 6 00:17:43.121905 kubelet[2668]: I0806 00:17:43.121475 2668 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 6 00:17:43.121905 kubelet[2668]: I0806 00:17:43.121512 2668 kubelet.go:2303] "Starting kubelet main sync loop" Aug 6 00:17:43.121905 kubelet[2668]: E0806 00:17:43.121598 2668 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 6 00:17:43.206445 kubelet[2668]: I0806 00:17:43.206394 2668 kubelet_node_status.go:70] "Attempting to register node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.221981 kubelet[2668]: E0806 00:17:43.221860 2668 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 6 00:17:43.239071 kubelet[2668]: I0806 00:17:43.237121 2668 kubelet_node_status.go:108] "Node was previously registered" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.239071 kubelet[2668]: I0806 00:17:43.237235 2668 kubelet_node_status.go:73] "Successfully registered node" node="srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.259927 kubelet[2668]: I0806 00:17:43.259416 2668 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 6 00:17:43.259927 kubelet[2668]: I0806 00:17:43.259452 2668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 6 00:17:43.259927 kubelet[2668]: I0806 00:17:43.259490 2668 state_mem.go:36] "Initialized new in-memory state store" Aug 6 00:17:43.259927 kubelet[2668]: I0806 00:17:43.259770 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 6 00:17:43.259927 kubelet[2668]: I0806 00:17:43.259811 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 6 00:17:43.259927 kubelet[2668]: I0806 00:17:43.259834 2668 policy_none.go:49] "None policy: Start" Aug 6 00:17:43.261998 kubelet[2668]: I0806 00:17:43.261081 2668 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 6 00:17:43.261998 kubelet[2668]: I0806 00:17:43.261132 2668 state_mem.go:35] "Initializing new in-memory state store" Aug 6 00:17:43.261998 kubelet[2668]: I0806 00:17:43.261380 2668 state_mem.go:75] "Updated machine memory state" Aug 6 00:17:43.278571 kubelet[2668]: I0806 00:17:43.278539 2668 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 6 
00:17:43.282514 kubelet[2668]: I0806 00:17:43.281823 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 6 00:17:43.423294 kubelet[2668]: I0806 00:17:43.423147 2668 topology_manager.go:215] "Topology Admit Handler" podUID="fafa61b99d536933826221b5c8b5ccd4" podNamespace="kube-system" podName="kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.423454 kubelet[2668]: I0806 00:17:43.423372 2668 topology_manager.go:215] "Topology Admit Handler" podUID="412b2d9684f938d69c723108684c1528" podNamespace="kube-system" podName="kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.423535 kubelet[2668]: I0806 00:17:43.423476 2668 topology_manager.go:215] "Topology Admit Handler" podUID="7b743f811f91b6fa67889d5f0ceed075" podNamespace="kube-system" podName="kube-scheduler-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.435853 kubelet[2668]: W0806 00:17:43.433793 2668 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 00:17:43.463495 kubelet[2668]: W0806 00:17:43.462878 2668 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 00:17:43.465210 kubelet[2668]: W0806 00:17:43.465177 2668 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 6 00:17:43.465310 kubelet[2668]: E0806 00:17:43.465267 2668 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.488652 kubelet[2668]: I0806 00:17:43.487495 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-ca-certs\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.488652 kubelet[2668]: I0806 00:17:43.487743 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-k8s-certs\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.488652 kubelet[2668]: I0806 00:17:43.487900 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-kubeconfig\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.488652 kubelet[2668]: I0806 00:17:43.488005 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.488652 
kubelet[2668]: I0806 00:17:43.488160 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fafa61b99d536933826221b5c8b5ccd4-k8s-certs\") pod \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" (UID: \"fafa61b99d536933826221b5c8b5ccd4\") " pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.489143 kubelet[2668]: I0806 00:17:43.488247 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fafa61b99d536933826221b5c8b5ccd4-usr-share-ca-certificates\") pod \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" (UID: \"fafa61b99d536933826221b5c8b5ccd4\") " pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.489143 kubelet[2668]: I0806 00:17:43.488348 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/412b2d9684f938d69c723108684c1528-flexvolume-dir\") pod \"kube-controller-manager-srv-iww3y.gb1.brightbox.com\" (UID: \"412b2d9684f938d69c723108684c1528\") " pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.489143 kubelet[2668]: I0806 00:17:43.488524 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b743f811f91b6fa67889d5f0ceed075-kubeconfig\") pod \"kube-scheduler-srv-iww3y.gb1.brightbox.com\" (UID: \"7b743f811f91b6fa67889d5f0ceed075\") " pod="kube-system/kube-scheduler-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:43.489143 kubelet[2668]: I0806 00:17:43.488622 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fafa61b99d536933826221b5c8b5ccd4-ca-certs\") pod \"kube-apiserver-srv-iww3y.gb1.brightbox.com\" (UID: \"fafa61b99d536933826221b5c8b5ccd4\") " pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" Aug 6 00:17:44.046362 kubelet[2668]: I0806 00:17:44.046002 2668 apiserver.go:52] "Watching apiserver" Aug 6 00:17:44.083348 kubelet[2668]: I0806 00:17:44.083214 2668 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 6 00:17:44.185529 kubelet[2668]: I0806 00:17:44.185443 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-iww3y.gb1.brightbox.com" podStartSLOduration=1.184045203 podCreationTimestamp="2024-08-06 00:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 00:17:44.181920761 +0000 UTC m=+1.304227251" watchObservedRunningTime="2024-08-06 00:17:44.184045203 +0000 UTC m=+1.306351684" Aug 6 00:17:44.206170 kubelet[2668]: I0806 00:17:44.205824 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-iww3y.gb1.brightbox.com" podStartSLOduration=1.205778595 podCreationTimestamp="2024-08-06 00:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 00:17:44.204359462 +0000 UTC m=+1.326665963" watchObservedRunningTime="2024-08-06 00:17:44.205778595 +0000 UTC m=+1.328085090" Aug 6 00:17:44.206170 kubelet[2668]: I0806 00:17:44.206014 2668 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-iww3y.gb1.brightbox.com" podStartSLOduration=4.205988657 podCreationTimestamp="2024-08-06 00:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 00:17:44.195505514 +0000 UTC m=+1.317812014" watchObservedRunningTime="2024-08-06 00:17:44.205988657 +0000 UTC m=+1.328295147" Aug 6 00:17:48.088149 sudo[1770]: pam_unix(sudo:session): session closed for user root Aug 6 00:17:48.232489 sshd[1767]: pam_unix(sshd:session): session closed for user core Aug 6 00:17:48.239111 systemd[1]: sshd@8-10.244.27.62:22-139.178.89.65:38892.service: Deactivated successfully. Aug 6 00:17:48.243860 systemd[1]: session-11.scope: Deactivated successfully. Aug 6 00:17:48.244525 systemd[1]: session-11.scope: Consumed 6.304s CPU time, 134.3M memory peak, 0B memory swap peak. Aug 6 00:17:48.246602 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Aug 6 00:17:48.249100 systemd-logind[1485]: Removed session 11. Aug 6 00:17:55.963741 kubelet[2668]: I0806 00:17:55.963317 2668 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 6 00:17:55.965208 containerd[1508]: time="2024-08-06T00:17:55.964909276Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 6 00:17:55.966452 kubelet[2668]: I0806 00:17:55.965588 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 6 00:17:56.643518 kubelet[2668]: I0806 00:17:56.643408 2668 topology_manager.go:215] "Topology Admit Handler" podUID="2ca3f805-4759-4040-ae21-a63658e309ac" podNamespace="kube-system" podName="kube-proxy-ghfcr" Aug 6 00:17:56.672136 systemd[1]: Created slice kubepods-besteffort-pod2ca3f805_4759_4040_ae21_a63658e309ac.slice - libcontainer container kubepods-besteffort-pod2ca3f805_4759_4040_ae21_a63658e309ac.slice. 
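The "No cni config template is specified, wait for other system components to drop the config" message from containerd, together with the recurring "cni plugin not initialized" pod_workers errors further down, simply means no CNI network configuration has been installed on this node yet; the Calico components admitted in the entries that follow are what will eventually provide one. As a quick illustration of what the runtime is waiting for, the sketch below lists the CNI conf directory; /etc/cni/net.d is assumed here as containerd's conventional default and may differ if this node's containerd config overrides it.

#!/usr/bin/env python3
# Illustrative check only: show whether a CNI config has been "dropped" yet.
# /etc/cni/net.d is an assumption (containerd's usual default conf dir).
from pathlib import Path

conf_dir = Path("/etc/cni/net.d")
configs = sorted(p.name for p in conf_dir.glob("*.conf*")) if conf_dir.is_dir() else []
print("CNI configs present:", configs or "none yet (pod network stays NotReady)")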
Aug 6 00:17:56.683141 kubelet[2668]: I0806 00:17:56.682126 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ca3f805-4759-4040-ae21-a63658e309ac-xtables-lock\") pod \"kube-proxy-ghfcr\" (UID: \"2ca3f805-4759-4040-ae21-a63658e309ac\") " pod="kube-system/kube-proxy-ghfcr" Aug 6 00:17:56.683141 kubelet[2668]: I0806 00:17:56.682200 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zg9p\" (UniqueName: \"kubernetes.io/projected/2ca3f805-4759-4040-ae21-a63658e309ac-kube-api-access-6zg9p\") pod \"kube-proxy-ghfcr\" (UID: \"2ca3f805-4759-4040-ae21-a63658e309ac\") " pod="kube-system/kube-proxy-ghfcr" Aug 6 00:17:56.683141 kubelet[2668]: I0806 00:17:56.682304 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ca3f805-4759-4040-ae21-a63658e309ac-kube-proxy\") pod \"kube-proxy-ghfcr\" (UID: \"2ca3f805-4759-4040-ae21-a63658e309ac\") " pod="kube-system/kube-proxy-ghfcr" Aug 6 00:17:56.683141 kubelet[2668]: I0806 00:17:56.682354 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ca3f805-4759-4040-ae21-a63658e309ac-lib-modules\") pod \"kube-proxy-ghfcr\" (UID: \"2ca3f805-4759-4040-ae21-a63658e309ac\") " pod="kube-system/kube-proxy-ghfcr" Aug 6 00:17:56.989374 containerd[1508]: time="2024-08-06T00:17:56.989121152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ghfcr,Uid:2ca3f805-4759-4040-ae21-a63658e309ac,Namespace:kube-system,Attempt:0,}" Aug 6 00:17:57.006041 kubelet[2668]: I0806 00:17:57.005538 2668 topology_manager.go:215] "Topology Admit Handler" podUID="d8dc0ee7-626b-4a2f-a721-33463c0d8b16" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-pt795" Aug 6 00:17:57.035511 systemd[1]: Created slice kubepods-besteffort-podd8dc0ee7_626b_4a2f_a721_33463c0d8b16.slice - libcontainer container kubepods-besteffort-podd8dc0ee7_626b_4a2f_a721_33463c0d8b16.slice. Aug 6 00:17:57.081843 containerd[1508]: time="2024-08-06T00:17:57.081393797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:17:57.083139 containerd[1508]: time="2024-08-06T00:17:57.081896966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:57.083139 containerd[1508]: time="2024-08-06T00:17:57.082761142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:17:57.083139 containerd[1508]: time="2024-08-06T00:17:57.082794265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:57.087051 kubelet[2668]: I0806 00:17:57.086853 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8dc0ee7-626b-4a2f-a721-33463c0d8b16-var-lib-calico\") pod \"tigera-operator-76c4974c85-pt795\" (UID: \"d8dc0ee7-626b-4a2f-a721-33463c0d8b16\") " pod="tigera-operator/tigera-operator-76c4974c85-pt795" Aug 6 00:17:57.087051 kubelet[2668]: I0806 00:17:57.086950 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmgwl\" (UniqueName: \"kubernetes.io/projected/d8dc0ee7-626b-4a2f-a721-33463c0d8b16-kube-api-access-lmgwl\") pod \"tigera-operator-76c4974c85-pt795\" (UID: \"d8dc0ee7-626b-4a2f-a721-33463c0d8b16\") " pod="tigera-operator/tigera-operator-76c4974c85-pt795" Aug 6 00:17:57.116283 systemd[1]: run-containerd-runc-k8s.io-a83ff869c2157bd893365deb3157b9ffbbab0ca8e10c99fa1020b0806c0fc651-runc.c2IrBD.mount: Deactivated successfully. Aug 6 00:17:57.129258 systemd[1]: Started cri-containerd-a83ff869c2157bd893365deb3157b9ffbbab0ca8e10c99fa1020b0806c0fc651.scope - libcontainer container a83ff869c2157bd893365deb3157b9ffbbab0ca8e10c99fa1020b0806c0fc651. Aug 6 00:17:57.178010 containerd[1508]: time="2024-08-06T00:17:57.177647362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ghfcr,Uid:2ca3f805-4759-4040-ae21-a63658e309ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"a83ff869c2157bd893365deb3157b9ffbbab0ca8e10c99fa1020b0806c0fc651\"" Aug 6 00:17:57.183397 containerd[1508]: time="2024-08-06T00:17:57.183320614Z" level=info msg="CreateContainer within sandbox \"a83ff869c2157bd893365deb3157b9ffbbab0ca8e10c99fa1020b0806c0fc651\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 6 00:17:57.207870 containerd[1508]: time="2024-08-06T00:17:57.207689287Z" level=info msg="CreateContainer within sandbox \"a83ff869c2157bd893365deb3157b9ffbbab0ca8e10c99fa1020b0806c0fc651\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"290c883e7023c90d082b9cccf2e9f58560a80a9229302faf60b54914a6fadad3\"" Aug 6 00:17:57.209079 containerd[1508]: time="2024-08-06T00:17:57.208993241Z" level=info msg="StartContainer for \"290c883e7023c90d082b9cccf2e9f58560a80a9229302faf60b54914a6fadad3\"" Aug 6 00:17:57.257182 systemd[1]: Started cri-containerd-290c883e7023c90d082b9cccf2e9f58560a80a9229302faf60b54914a6fadad3.scope - libcontainer container 290c883e7023c90d082b9cccf2e9f58560a80a9229302faf60b54914a6fadad3. Aug 6 00:17:57.313602 containerd[1508]: time="2024-08-06T00:17:57.313534341Z" level=info msg="StartContainer for \"290c883e7023c90d082b9cccf2e9f58560a80a9229302faf60b54914a6fadad3\" returns successfully" Aug 6 00:17:57.354025 containerd[1508]: time="2024-08-06T00:17:57.353640149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-pt795,Uid:d8dc0ee7-626b-4a2f-a721-33463c0d8b16,Namespace:tigera-operator,Attempt:0,}" Aug 6 00:17:57.397689 containerd[1508]: time="2024-08-06T00:17:57.397295520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:17:57.397689 containerd[1508]: time="2024-08-06T00:17:57.397454396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:57.397689 containerd[1508]: time="2024-08-06T00:17:57.397486384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:17:57.397689 containerd[1508]: time="2024-08-06T00:17:57.397507289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:17:57.432215 systemd[1]: Started cri-containerd-9c4212e884cf98b313e208e5681ed4c67d025b3d68dc4a0965d9985116cd218b.scope - libcontainer container 9c4212e884cf98b313e208e5681ed4c67d025b3d68dc4a0965d9985116cd218b. Aug 6 00:17:57.517165 containerd[1508]: time="2024-08-06T00:17:57.516784397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-pt795,Uid:d8dc0ee7-626b-4a2f-a721-33463c0d8b16,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9c4212e884cf98b313e208e5681ed4c67d025b3d68dc4a0965d9985116cd218b\"" Aug 6 00:17:57.525366 containerd[1508]: time="2024-08-06T00:17:57.524249203Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 6 00:17:58.223071 kubelet[2668]: I0806 00:17:58.222759 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ghfcr" podStartSLOduration=2.22154407 podCreationTimestamp="2024-08-06 00:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 00:17:58.218479597 +0000 UTC m=+15.340786100" watchObservedRunningTime="2024-08-06 00:17:58.22154407 +0000 UTC m=+15.343850560" Aug 6 00:17:59.350204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309213568.mount: Deactivated successfully. 
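The kubelet's "Observed pod startup duration" entries above follow a fixed key=value layout: podStartSLOduration is in seconds, the "m=+N" suffixes are seconds since the kubelet process started, and an all-zero firstStartedPulling/lastFinishedPulling timestamp (0001-01-01 ...) indicates that no image pull was recorded for that pod. A small sketch for pulling those fields out of journal lines like these; the regex only relies on the fields visible in this log.

#!/usr/bin/env python3
# Extract pod name, startup SLO duration and whether an image pull was
# involved from kubelet "Observed pod startup duration" journal lines.
import re
import sys

ENTRY = re.compile(
    r'Observed pod startup duration.*?'
    r'pod="(?P<pod>[^"]+)".*?'
    r'podStartSLOduration=(?P<slo>[0-9.]+).*?'
    r'firstStartedPulling="(?P<first_pull>[^"]+)"'
)

for line in sys.stdin:
    m = ENTRY.search(line)
    if m:
        pulled = not m.group("first_pull").startswith("0001-01-01")
        print(f'{m.group("pod")}: {float(m.group("slo")):.3f}s (image pull: {pulled})')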
Aug 6 00:18:00.221534 containerd[1508]: time="2024-08-06T00:18:00.221411933Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:00.223662 containerd[1508]: time="2024-08-06T00:18:00.223432039Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076088" Aug 6 00:18:00.225272 containerd[1508]: time="2024-08-06T00:18:00.224741156Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:00.228221 containerd[1508]: time="2024-08-06T00:18:00.228183059Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:00.229725 containerd[1508]: time="2024-08-06T00:18:00.229656196Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.704067722s" Aug 6 00:18:00.229875 containerd[1508]: time="2024-08-06T00:18:00.229845648Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Aug 6 00:18:00.233619 containerd[1508]: time="2024-08-06T00:18:00.233582386Z" level=info msg="CreateContainer within sandbox \"9c4212e884cf98b313e208e5681ed4c67d025b3d68dc4a0965d9985116cd218b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 6 00:18:00.251055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013771709.mount: Deactivated successfully. Aug 6 00:18:00.253574 containerd[1508]: time="2024-08-06T00:18:00.253486474Z" level=info msg="CreateContainer within sandbox \"9c4212e884cf98b313e208e5681ed4c67d025b3d68dc4a0965d9985116cd218b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fc123f9c030c6f7f5481a97bef61f13b263f7590d30331ded7b25f7b92589d36\"" Aug 6 00:18:00.256155 containerd[1508]: time="2024-08-06T00:18:00.255226454Z" level=info msg="StartContainer for \"fc123f9c030c6f7f5481a97bef61f13b263f7590d30331ded7b25f7b92589d36\"" Aug 6 00:18:00.321230 systemd[1]: Started cri-containerd-fc123f9c030c6f7f5481a97bef61f13b263f7590d30331ded7b25f7b92589d36.scope - libcontainer container fc123f9c030c6f7f5481a97bef61f13b263f7590d30331ded7b25f7b92589d36. 
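As a back-of-the-envelope check on the operator image pull logged above, the "bytes read" figure from the stop-pulling event and the "in 2.704067722s" duration give a rough transfer rate (this ignores decompression and any layers already present locally):

# Rough pull rate for quay.io/tigera/operator:v1.34.0 from the values above.
bytes_read = 22_076_088      # "active requests=0, bytes read=22076088"
seconds = 2.704067722        # "... in 2.704067722s"
print(f"~{bytes_read / seconds / 1_000_000:.1f} MB/s")   # roughly 8.2 MB/s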
Aug 6 00:18:00.366673 containerd[1508]: time="2024-08-06T00:18:00.366609709Z" level=info msg="StartContainer for \"fc123f9c030c6f7f5481a97bef61f13b263f7590d30331ded7b25f7b92589d36\" returns successfully" Aug 6 00:18:03.156602 kubelet[2668]: I0806 00:18:03.156078 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-pt795" podStartSLOduration=4.446022683 podCreationTimestamp="2024-08-06 00:17:56 +0000 UTC" firstStartedPulling="2024-08-06 00:17:57.520638658 +0000 UTC m=+14.642945140" lastFinishedPulling="2024-08-06 00:18:00.230612282 +0000 UTC m=+17.352918758" observedRunningTime="2024-08-06 00:18:01.246098486 +0000 UTC m=+18.368404981" watchObservedRunningTime="2024-08-06 00:18:03.155996301 +0000 UTC m=+20.278302791" Aug 6 00:18:03.963597 kubelet[2668]: I0806 00:18:03.963458 2668 topology_manager.go:215] "Topology Admit Handler" podUID="a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c" podNamespace="calico-system" podName="calico-typha-69c64cdf78-cz4gq" Aug 6 00:18:03.993744 systemd[1]: Created slice kubepods-besteffort-poda6a8ab48_45d5_46d9_80de_ddf5a9d45b5c.slice - libcontainer container kubepods-besteffort-poda6a8ab48_45d5_46d9_80de_ddf5a9d45b5c.slice. Aug 6 00:18:04.042673 kubelet[2668]: I0806 00:18:04.042567 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c-tigera-ca-bundle\") pod \"calico-typha-69c64cdf78-cz4gq\" (UID: \"a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c\") " pod="calico-system/calico-typha-69c64cdf78-cz4gq" Aug 6 00:18:04.042673 kubelet[2668]: I0806 00:18:04.042687 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c-typha-certs\") pod \"calico-typha-69c64cdf78-cz4gq\" (UID: \"a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c\") " pod="calico-system/calico-typha-69c64cdf78-cz4gq" Aug 6 00:18:04.042982 kubelet[2668]: I0806 00:18:04.042730 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqjx\" (UniqueName: \"kubernetes.io/projected/a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c-kube-api-access-zxqjx\") pod \"calico-typha-69c64cdf78-cz4gq\" (UID: \"a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c\") " pod="calico-system/calico-typha-69c64cdf78-cz4gq" Aug 6 00:18:04.080419 kubelet[2668]: I0806 00:18:04.078670 2668 topology_manager.go:215] "Topology Admit Handler" podUID="2775ecb8-81a0-40d4-9404-6f50c268b066" podNamespace="calico-system" podName="calico-node-29zdn" Aug 6 00:18:04.095508 systemd[1]: Created slice kubepods-besteffort-pod2775ecb8_81a0_40d4_9404_6f50c268b066.slice - libcontainer container kubepods-besteffort-pod2775ecb8_81a0_40d4_9404_6f50c268b066.slice. 
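The systemd "Created slice" entries pair up with the pod UIDs from the Topology Admit Handler lines: for a BestEffort pod the kubelet's systemd cgroup driver names the slice kubepods-besteffort-pod<uid>.slice, with the dashes in the UID replaced by underscores, which is exactly the pattern visible above. A tiny helper reproducing that mapping (illustrative; other QoS classes use different slice names):

def besteffort_pod_slice(pod_uid: str) -> str:
    """Slice name the systemd cgroup driver uses for a BestEffort pod,
    matching the "Created slice" entries in this log."""
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

# The calico-typha pod admitted above:
print(besteffort_pod_slice("a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c"))
# -> kubepods-besteffort-poda6a8ab48_45d5_46d9_80de_ddf5a9d45b5c.slice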
Aug 6 00:18:04.144214 kubelet[2668]: I0806 00:18:04.144065 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-cni-bin-dir\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144214 kubelet[2668]: I0806 00:18:04.144201 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-cni-log-dir\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144214 kubelet[2668]: I0806 00:18:04.144245 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-lib-modules\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144566 kubelet[2668]: I0806 00:18:04.144292 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-var-lib-calico\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144566 kubelet[2668]: I0806 00:18:04.144391 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-xtables-lock\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144566 kubelet[2668]: I0806 00:18:04.144431 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-policysync\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144566 kubelet[2668]: I0806 00:18:04.144475 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2775ecb8-81a0-40d4-9404-6f50c268b066-node-certs\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144566 kubelet[2668]: I0806 00:18:04.144510 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-flexvol-driver-host\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144805 kubelet[2668]: I0806 00:18:04.144547 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2775ecb8-81a0-40d4-9404-6f50c268b066-tigera-ca-bundle\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144805 kubelet[2668]: I0806 00:18:04.144592 2668 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-var-run-calico\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144805 kubelet[2668]: I0806 00:18:04.144631 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8ct\" (UniqueName: \"kubernetes.io/projected/2775ecb8-81a0-40d4-9404-6f50c268b066-kube-api-access-dp8ct\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.144805 kubelet[2668]: I0806 00:18:04.144695 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2775ecb8-81a0-40d4-9404-6f50c268b066-cni-net-dir\") pod \"calico-node-29zdn\" (UID: \"2775ecb8-81a0-40d4-9404-6f50c268b066\") " pod="calico-system/calico-node-29zdn" Aug 6 00:18:04.217354 kubelet[2668]: I0806 00:18:04.217181 2668 topology_manager.go:215] "Topology Admit Handler" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" podNamespace="calico-system" podName="csi-node-driver-tdjb9" Aug 6 00:18:04.220259 kubelet[2668]: E0806 00:18:04.218885 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:04.251916 kubelet[2668]: I0806 00:18:04.245662 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a0233e65-9604-4b6a-9cef-2013386fdc9d-varrun\") pod \"csi-node-driver-tdjb9\" (UID: \"a0233e65-9604-4b6a-9cef-2013386fdc9d\") " pod="calico-system/csi-node-driver-tdjb9" Aug 6 00:18:04.251916 kubelet[2668]: I0806 00:18:04.245842 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0233e65-9604-4b6a-9cef-2013386fdc9d-kubelet-dir\") pod \"csi-node-driver-tdjb9\" (UID: \"a0233e65-9604-4b6a-9cef-2013386fdc9d\") " pod="calico-system/csi-node-driver-tdjb9" Aug 6 00:18:04.251916 kubelet[2668]: I0806 00:18:04.245898 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a0233e65-9604-4b6a-9cef-2013386fdc9d-registration-dir\") pod \"csi-node-driver-tdjb9\" (UID: \"a0233e65-9604-4b6a-9cef-2013386fdc9d\") " pod="calico-system/csi-node-driver-tdjb9" Aug 6 00:18:04.251916 kubelet[2668]: I0806 00:18:04.245952 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmsgv\" (UniqueName: \"kubernetes.io/projected/a0233e65-9604-4b6a-9cef-2013386fdc9d-kube-api-access-gmsgv\") pod \"csi-node-driver-tdjb9\" (UID: \"a0233e65-9604-4b6a-9cef-2013386fdc9d\") " pod="calico-system/csi-node-driver-tdjb9" Aug 6 00:18:04.251916 kubelet[2668]: I0806 00:18:04.246027 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a0233e65-9604-4b6a-9cef-2013386fdc9d-socket-dir\") pod \"csi-node-driver-tdjb9\" (UID: 
\"a0233e65-9604-4b6a-9cef-2013386fdc9d\") " pod="calico-system/csi-node-driver-tdjb9" Aug 6 00:18:04.258989 kubelet[2668]: E0806 00:18:04.256225 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.258989 kubelet[2668]: W0806 00:18:04.256271 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.258989 kubelet[2668]: E0806 00:18:04.256331 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.264304 kubelet[2668]: E0806 00:18:04.264266 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.264304 kubelet[2668]: W0806 00:18:04.264298 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.264508 kubelet[2668]: E0806 00:18:04.264329 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.305945 containerd[1508]: time="2024-08-06T00:18:04.303495763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c64cdf78-cz4gq,Uid:a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c,Namespace:calico-system,Attempt:0,}" Aug 6 00:18:04.307190 kubelet[2668]: E0806 00:18:04.307142 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.307190 kubelet[2668]: W0806 00:18:04.307175 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.308030 kubelet[2668]: E0806 00:18:04.307997 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.347270 kubelet[2668]: E0806 00:18:04.347111 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.347270 kubelet[2668]: W0806 00:18:04.347139 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.347270 kubelet[2668]: E0806 00:18:04.347167 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 00:18:04.348021 kubelet[2668]: E0806 00:18:04.347951 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.348294 kubelet[2668]: W0806 00:18:04.348125 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.348294 kubelet[2668]: E0806 00:18:04.348195 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.348997 kubelet[2668]: E0806 00:18:04.348809 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.348997 kubelet[2668]: W0806 00:18:04.348828 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.348997 kubelet[2668]: E0806 00:18:04.348848 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.349508 kubelet[2668]: E0806 00:18:04.349445 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.349508 kubelet[2668]: W0806 00:18:04.349464 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.349871 kubelet[2668]: E0806 00:18:04.349713 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.350147 kubelet[2668]: E0806 00:18:04.350092 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.350351 kubelet[2668]: W0806 00:18:04.350252 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.350540 kubelet[2668]: E0806 00:18:04.350432 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.351246 kubelet[2668]: E0806 00:18:04.351148 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.351246 kubelet[2668]: W0806 00:18:04.351167 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.351246 kubelet[2668]: E0806 00:18:04.351212 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 00:18:04.352049 kubelet[2668]: E0806 00:18:04.351869 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.352049 kubelet[2668]: W0806 00:18:04.351887 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.352049 kubelet[2668]: E0806 00:18:04.351958 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.352685 kubelet[2668]: E0806 00:18:04.352584 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.352685 kubelet[2668]: W0806 00:18:04.352605 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.353117 kubelet[2668]: E0806 00:18:04.352853 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.353331 kubelet[2668]: E0806 00:18:04.353310 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.353581 kubelet[2668]: W0806 00:18:04.353433 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.353895 kubelet[2668]: E0806 00:18:04.353808 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.355090 kubelet[2668]: E0806 00:18:04.355070 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.355414 kubelet[2668]: W0806 00:18:04.355207 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.355414 kubelet[2668]: E0806 00:18:04.355273 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.355672 kubelet[2668]: E0806 00:18:04.355653 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.355839 kubelet[2668]: W0806 00:18:04.355762 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.355839 kubelet[2668]: E0806 00:18:04.355814 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 00:18:04.356584 kubelet[2668]: E0806 00:18:04.356427 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.356584 kubelet[2668]: W0806 00:18:04.356445 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.356584 kubelet[2668]: E0806 00:18:04.356488 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.357004 kubelet[2668]: E0806 00:18:04.356875 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.357004 kubelet[2668]: W0806 00:18:04.356893 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.357004 kubelet[2668]: E0806 00:18:04.356939 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.357632 kubelet[2668]: E0806 00:18:04.357496 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.357632 kubelet[2668]: W0806 00:18:04.357513 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.357814 kubelet[2668]: E0806 00:18:04.357788 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.358068 kubelet[2668]: E0806 00:18:04.358029 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.358068 kubelet[2668]: W0806 00:18:04.358046 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.358418 kubelet[2668]: E0806 00:18:04.358294 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.358680 kubelet[2668]: E0806 00:18:04.358661 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.358909 kubelet[2668]: W0806 00:18:04.358808 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.359143 kubelet[2668]: E0806 00:18:04.359061 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 00:18:04.361207 kubelet[2668]: E0806 00:18:04.360346 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.361207 kubelet[2668]: W0806 00:18:04.360364 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.361207 kubelet[2668]: E0806 00:18:04.360442 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.362797 kubelet[2668]: E0806 00:18:04.361598 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.362797 kubelet[2668]: W0806 00:18:04.361616 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.362797 kubelet[2668]: E0806 00:18:04.361686 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.362797 kubelet[2668]: E0806 00:18:04.362153 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.362797 kubelet[2668]: W0806 00:18:04.362168 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.362797 kubelet[2668]: E0806 00:18:04.362234 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.364398 kubelet[2668]: E0806 00:18:04.364223 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.364398 kubelet[2668]: W0806 00:18:04.364246 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.364398 kubelet[2668]: E0806 00:18:04.364349 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.365117 kubelet[2668]: E0806 00:18:04.364854 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.365117 kubelet[2668]: W0806 00:18:04.364872 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.365117 kubelet[2668]: E0806 00:18:04.365067 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 00:18:04.366284 kubelet[2668]: E0806 00:18:04.366011 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.366284 kubelet[2668]: W0806 00:18:04.366030 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.366422 kubelet[2668]: E0806 00:18:04.366286 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.366925 kubelet[2668]: E0806 00:18:04.366904 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.367066 kubelet[2668]: W0806 00:18:04.367045 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.368353 kubelet[2668]: E0806 00:18:04.368319 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.369575 kubelet[2668]: E0806 00:18:04.369555 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.369991 kubelet[2668]: W0806 00:18:04.369774 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.369991 kubelet[2668]: E0806 00:18:04.369816 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.370562 kubelet[2668]: E0806 00:18:04.370522 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.370562 kubelet[2668]: W0806 00:18:04.370543 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.370562 kubelet[2668]: E0806 00:18:04.370564 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 6 00:18:04.393497 kubelet[2668]: E0806 00:18:04.393385 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 6 00:18:04.393497 kubelet[2668]: W0806 00:18:04.393487 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 6 00:18:04.394280 kubelet[2668]: E0806 00:18:04.393549 2668 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 6 00:18:04.398783 containerd[1508]: time="2024-08-06T00:18:04.392328452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:04.398902 containerd[1508]: time="2024-08-06T00:18:04.398799250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:04.398998 containerd[1508]: time="2024-08-06T00:18:04.398938951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:04.399080 containerd[1508]: time="2024-08-06T00:18:04.399027255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:04.404411 containerd[1508]: time="2024-08-06T00:18:04.404065787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-29zdn,Uid:2775ecb8-81a0-40d4-9404-6f50c268b066,Namespace:calico-system,Attempt:0,}" Aug 6 00:18:04.471790 systemd[1]: Started cri-containerd-04ac13a1c6ddab1edffe75d4280d78086acea5f7a9721a1ceddfd188baf6666b.scope - libcontainer container 04ac13a1c6ddab1edffe75d4280d78086acea5f7a9721a1ceddfd188baf6666b. Aug 6 00:18:04.501222 containerd[1508]: time="2024-08-06T00:18:04.498875017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:04.502260 containerd[1508]: time="2024-08-06T00:18:04.502182282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:04.502451 containerd[1508]: time="2024-08-06T00:18:04.502384705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:04.502451 containerd[1508]: time="2024-08-06T00:18:04.502415888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:04.552182 systemd[1]: Started cri-containerd-cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0.scope - libcontainer container cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0. 
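The burst of driver-call errors above comes from the kubelet's periodic FlexVolume probe: it walks the plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ on this node), runs each driver binary with the argument init, and expects a JSON status object on stdout. The nodeagent~uds/uds executable does not exist yet, hence "executable file not found in $PATH" followed by "unexpected end of JSON input" on the empty output; the flexvol-driver container built from ghcr.io/flatcar/calico/pod2daemon-flexvol in the entries below is what installs it. Purely to illustrate the call convention the probe expects (this stub is not the Calico uds driver), an init response might look like:

#!/usr/bin/env python3
# Stub of the FlexVolume "init" handshake the probes above are failing:
# the kubelet runs "<driver> init" and parses a JSON status object.
# The response shape shown is the generic FlexVolume convention, not
# anything specific to the Calico nodeagent~uds driver.
import json
import sys

if len(sys.argv) > 1 and sys.argv[1] == "init":
    print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    sys.exit(0)

print(json.dumps({"status": "Not supported"}))
sys.exit(1)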
Aug 6 00:18:04.669715 containerd[1508]: time="2024-08-06T00:18:04.669387050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-29zdn,Uid:2775ecb8-81a0-40d4-9404-6f50c268b066,Namespace:calico-system,Attempt:0,} returns sandbox id \"cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0\"" Aug 6 00:18:04.675258 containerd[1508]: time="2024-08-06T00:18:04.675204515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 6 00:18:04.716409 containerd[1508]: time="2024-08-06T00:18:04.716304206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c64cdf78-cz4gq,Uid:a6a8ab48-45d5-46d9-80de-ddf5a9d45b5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"04ac13a1c6ddab1edffe75d4280d78086acea5f7a9721a1ceddfd188baf6666b\"" Aug 6 00:18:06.123854 kubelet[2668]: E0806 00:18:06.123013 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:06.345003 containerd[1508]: time="2024-08-06T00:18:06.344857934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:06.347242 containerd[1508]: time="2024-08-06T00:18:06.347159960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 6 00:18:06.349246 containerd[1508]: time="2024-08-06T00:18:06.349163673Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:06.354769 containerd[1508]: time="2024-08-06T00:18:06.354653101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:06.357892 containerd[1508]: time="2024-08-06T00:18:06.355787808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.680514759s" Aug 6 00:18:06.357892 containerd[1508]: time="2024-08-06T00:18:06.355837168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 6 00:18:06.358512 containerd[1508]: time="2024-08-06T00:18:06.358477398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 6 00:18:06.361511 containerd[1508]: time="2024-08-06T00:18:06.361462336Z" level=info msg="CreateContainer within sandbox \"cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 6 00:18:06.392609 containerd[1508]: time="2024-08-06T00:18:06.391817105Z" level=info msg="CreateContainer within sandbox \"cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d\"" Aug 6 00:18:06.397088 containerd[1508]: time="2024-08-06T00:18:06.395122140Z" level=info msg="StartContainer for \"66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d\"" Aug 6 00:18:06.535232 systemd[1]: Started cri-containerd-66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d.scope - libcontainer container 66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d. Aug 6 00:18:06.631485 containerd[1508]: time="2024-08-06T00:18:06.631416541Z" level=info msg="StartContainer for \"66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d\" returns successfully" Aug 6 00:18:06.656931 systemd[1]: cri-containerd-66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d.scope: Deactivated successfully. Aug 6 00:18:06.718055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d-rootfs.mount: Deactivated successfully. Aug 6 00:18:06.741185 containerd[1508]: time="2024-08-06T00:18:06.740872057Z" level=info msg="shim disconnected" id=66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d namespace=k8s.io Aug 6 00:18:06.741185 containerd[1508]: time="2024-08-06T00:18:06.741176948Z" level=warning msg="cleaning up after shim disconnected" id=66ede5c36d5893058745c9f4a90c5908d07055315c2a30378321f02d936ed05d namespace=k8s.io Aug 6 00:18:06.741185 containerd[1508]: time="2024-08-06T00:18:06.741196661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 00:18:08.123537 kubelet[2668]: E0806 00:18:08.122956 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:09.877029 containerd[1508]: time="2024-08-06T00:18:09.876874748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:09.879786 containerd[1508]: time="2024-08-06T00:18:09.879412601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 6 00:18:09.880994 containerd[1508]: time="2024-08-06T00:18:09.880774859Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:09.885681 containerd[1508]: time="2024-08-06T00:18:09.885646678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:09.888471 containerd[1508]: time="2024-08-06T00:18:09.888434816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.529902144s" Aug 6 00:18:09.890101 containerd[1508]: time="2024-08-06T00:18:09.888488508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" 
returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 6 00:18:09.890885 containerd[1508]: time="2024-08-06T00:18:09.890536775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 6 00:18:09.940950 containerd[1508]: time="2024-08-06T00:18:09.940884216Z" level=info msg="CreateContainer within sandbox \"04ac13a1c6ddab1edffe75d4280d78086acea5f7a9721a1ceddfd188baf6666b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 6 00:18:09.974566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191744615.mount: Deactivated successfully. Aug 6 00:18:09.986087 containerd[1508]: time="2024-08-06T00:18:09.985897658Z" level=info msg="CreateContainer within sandbox \"04ac13a1c6ddab1edffe75d4280d78086acea5f7a9721a1ceddfd188baf6666b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17b1727a0028f3b2331f1773db00fff9ad467814288b04e9f3dbe4444fc770a3\"" Aug 6 00:18:09.987802 containerd[1508]: time="2024-08-06T00:18:09.987445187Z" level=info msg="StartContainer for \"17b1727a0028f3b2331f1773db00fff9ad467814288b04e9f3dbe4444fc770a3\"" Aug 6 00:18:10.066476 systemd[1]: Started cri-containerd-17b1727a0028f3b2331f1773db00fff9ad467814288b04e9f3dbe4444fc770a3.scope - libcontainer container 17b1727a0028f3b2331f1773db00fff9ad467814288b04e9f3dbe4444fc770a3. Aug 6 00:18:10.122252 kubelet[2668]: E0806 00:18:10.122127 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:10.152632 containerd[1508]: time="2024-08-06T00:18:10.152263227Z" level=info msg="StartContainer for \"17b1727a0028f3b2331f1773db00fff9ad467814288b04e9f3dbe4444fc770a3\" returns successfully" Aug 6 00:18:10.298143 kubelet[2668]: I0806 00:18:10.297554 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-69c64cdf78-cz4gq" podStartSLOduration=2.127182353 podCreationTimestamp="2024-08-06 00:18:03 +0000 UTC" firstStartedPulling="2024-08-06 00:18:04.718813363 +0000 UTC m=+21.841119846" lastFinishedPulling="2024-08-06 00:18:09.88909073 +0000 UTC m=+27.011397220" observedRunningTime="2024-08-06 00:18:10.296114203 +0000 UTC m=+27.418420703" watchObservedRunningTime="2024-08-06 00:18:10.297459727 +0000 UTC m=+27.419766218" Aug 6 00:18:11.285673 kubelet[2668]: I0806 00:18:11.285622 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 00:18:12.123116 kubelet[2668]: E0806 00:18:12.122568 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:14.122441 kubelet[2668]: E0806 00:18:14.122329 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:15.844646 containerd[1508]: time="2024-08-06T00:18:15.844532623Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:15.847198 containerd[1508]: time="2024-08-06T00:18:15.847064055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 6 00:18:15.848056 containerd[1508]: time="2024-08-06T00:18:15.847945584Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:15.853049 containerd[1508]: time="2024-08-06T00:18:15.852943070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:15.854987 containerd[1508]: time="2024-08-06T00:18:15.854631927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.96403692s" Aug 6 00:18:15.854987 containerd[1508]: time="2024-08-06T00:18:15.854690250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 6 00:18:15.859476 containerd[1508]: time="2024-08-06T00:18:15.859432611Z" level=info msg="CreateContainer within sandbox \"cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 6 00:18:15.914097 containerd[1508]: time="2024-08-06T00:18:15.914027124Z" level=info msg="CreateContainer within sandbox \"cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72\"" Aug 6 00:18:15.915293 containerd[1508]: time="2024-08-06T00:18:15.915255611Z" level=info msg="StartContainer for \"a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72\"" Aug 6 00:18:16.005780 systemd[1]: run-containerd-runc-k8s.io-a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72-runc.QrtuAz.mount: Deactivated successfully. Aug 6 00:18:16.021204 systemd[1]: Started cri-containerd-a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72.scope - libcontainer container a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72. Aug 6 00:18:16.093442 containerd[1508]: time="2024-08-06T00:18:16.093374850Z" level=info msg="StartContainer for \"a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72\" returns successfully" Aug 6 00:18:16.122480 kubelet[2668]: E0806 00:18:16.122342 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:16.828577 systemd[1]: cri-containerd-a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72.scope: Deactivated successfully. 
Aug 6 00:18:16.880753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72-rootfs.mount: Deactivated successfully. Aug 6 00:18:16.963591 kubelet[2668]: I0806 00:18:16.963548 2668 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 6 00:18:16.997333 containerd[1508]: time="2024-08-06T00:18:16.997217132Z" level=info msg="shim disconnected" id=a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72 namespace=k8s.io Aug 6 00:18:16.997333 containerd[1508]: time="2024-08-06T00:18:16.997329525Z" level=warning msg="cleaning up after shim disconnected" id=a18c2db9c3b7b5d67cb82fe97e1dc2a1ac863d747b6cbd35662c7b30cf98cc72 namespace=k8s.io Aug 6 00:18:16.997333 containerd[1508]: time="2024-08-06T00:18:16.997346213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 6 00:18:17.051324 kubelet[2668]: I0806 00:18:17.051192 2668 topology_manager.go:215] "Topology Admit Handler" podUID="ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af" podNamespace="kube-system" podName="coredns-5dd5756b68-zpvfb" Aug 6 00:18:17.057055 kubelet[2668]: I0806 00:18:17.057002 2668 topology_manager.go:215] "Topology Admit Handler" podUID="eed81736-ac05-4e75-94df-0684511dda22" podNamespace="kube-system" podName="coredns-5dd5756b68-rxfx5" Aug 6 00:18:17.060636 kubelet[2668]: I0806 00:18:17.060483 2668 topology_manager.go:215] "Topology Admit Handler" podUID="a9dae8f8-5755-4951-8d86-28ab0b866d2e" podNamespace="calico-system" podName="calico-kube-controllers-79cc9f459b-fwh5q" Aug 6 00:18:17.064850 kubelet[2668]: I0806 00:18:17.064033 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddpmb\" (UniqueName: \"kubernetes.io/projected/eed81736-ac05-4e75-94df-0684511dda22-kube-api-access-ddpmb\") pod \"coredns-5dd5756b68-rxfx5\" (UID: \"eed81736-ac05-4e75-94df-0684511dda22\") " pod="kube-system/coredns-5dd5756b68-rxfx5" Aug 6 00:18:17.064850 kubelet[2668]: I0806 00:18:17.064123 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eed81736-ac05-4e75-94df-0684511dda22-config-volume\") pod \"coredns-5dd5756b68-rxfx5\" (UID: \"eed81736-ac05-4e75-94df-0684511dda22\") " pod="kube-system/coredns-5dd5756b68-rxfx5" Aug 6 00:18:17.064850 kubelet[2668]: I0806 00:18:17.064305 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwgg4\" (UniqueName: \"kubernetes.io/projected/ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af-kube-api-access-gwgg4\") pod \"coredns-5dd5756b68-zpvfb\" (UID: \"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af\") " pod="kube-system/coredns-5dd5756b68-zpvfb" Aug 6 00:18:17.064850 kubelet[2668]: I0806 00:18:17.064356 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af-config-volume\") pod \"coredns-5dd5756b68-zpvfb\" (UID: \"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af\") " pod="kube-system/coredns-5dd5756b68-zpvfb" Aug 6 00:18:17.106094 systemd[1]: Created slice kubepods-besteffort-poda9dae8f8_5755_4951_8d86_28ab0b866d2e.slice - libcontainer container kubepods-besteffort-poda9dae8f8_5755_4951_8d86_28ab0b866d2e.slice. 
Aug 6 00:18:17.111896 systemd[1]: Created slice kubepods-burstable-podef6fd30e_55f2_4b6b_a5ca_c27d38eac3af.slice - libcontainer container kubepods-burstable-podef6fd30e_55f2_4b6b_a5ca_c27d38eac3af.slice. Aug 6 00:18:17.129014 systemd[1]: Created slice kubepods-burstable-podeed81736_ac05_4e75_94df_0684511dda22.slice - libcontainer container kubepods-burstable-podeed81736_ac05_4e75_94df_0684511dda22.slice. Aug 6 00:18:17.166521 kubelet[2668]: I0806 00:18:17.166260 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9dae8f8-5755-4951-8d86-28ab0b866d2e-tigera-ca-bundle\") pod \"calico-kube-controllers-79cc9f459b-fwh5q\" (UID: \"a9dae8f8-5755-4951-8d86-28ab0b866d2e\") " pod="calico-system/calico-kube-controllers-79cc9f459b-fwh5q" Aug 6 00:18:17.166521 kubelet[2668]: I0806 00:18:17.166398 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpzpx\" (UniqueName: \"kubernetes.io/projected/a9dae8f8-5755-4951-8d86-28ab0b866d2e-kube-api-access-zpzpx\") pod \"calico-kube-controllers-79cc9f459b-fwh5q\" (UID: \"a9dae8f8-5755-4951-8d86-28ab0b866d2e\") " pod="calico-system/calico-kube-controllers-79cc9f459b-fwh5q" Aug 6 00:18:17.304073 containerd[1508]: time="2024-08-06T00:18:17.304009308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 6 00:18:17.419409 containerd[1508]: time="2024-08-06T00:18:17.419045928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zpvfb,Uid:ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af,Namespace:kube-system,Attempt:0,}" Aug 6 00:18:17.419409 containerd[1508]: time="2024-08-06T00:18:17.419350273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc9f459b-fwh5q,Uid:a9dae8f8-5755-4951-8d86-28ab0b866d2e,Namespace:calico-system,Attempt:0,}" Aug 6 00:18:17.452846 containerd[1508]: time="2024-08-06T00:18:17.452791243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rxfx5,Uid:eed81736-ac05-4e75-94df-0684511dda22,Namespace:kube-system,Attempt:0,}" Aug 6 00:18:17.728673 containerd[1508]: time="2024-08-06T00:18:17.728506454Z" level=error msg="Failed to destroy network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.736382 containerd[1508]: time="2024-08-06T00:18:17.736320632Z" level=error msg="Failed to destroy network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.740462 containerd[1508]: time="2024-08-06T00:18:17.740409542Z" level=error msg="encountered an error cleaning up failed sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.740550 containerd[1508]: time="2024-08-06T00:18:17.740499257Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-zpvfb,Uid:ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.740743 containerd[1508]: time="2024-08-06T00:18:17.740699844Z" level=error msg="encountered an error cleaning up failed sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.740906 containerd[1508]: time="2024-08-06T00:18:17.740855147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rxfx5,Uid:eed81736-ac05-4e75-94df-0684511dda22,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.747414 kubelet[2668]: E0806 00:18:17.747377 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.747545 kubelet[2668]: E0806 00:18:17.747507 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-rxfx5" Aug 6 00:18:17.747632 kubelet[2668]: E0806 00:18:17.747558 2668 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-rxfx5" Aug 6 00:18:17.747695 kubelet[2668]: E0806 00:18:17.747643 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-rxfx5_kube-system(eed81736-ac05-4e75-94df-0684511dda22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-rxfx5_kube-system(eed81736-ac05-4e75-94df-0684511dda22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-5dd5756b68-rxfx5" podUID="eed81736-ac05-4e75-94df-0684511dda22" Aug 6 00:18:17.749990 kubelet[2668]: E0806 00:18:17.748004 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.749990 kubelet[2668]: E0806 00:18:17.748053 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-zpvfb" Aug 6 00:18:17.749990 kubelet[2668]: E0806 00:18:17.748088 2668 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-zpvfb" Aug 6 00:18:17.750589 kubelet[2668]: E0806 00:18:17.748162 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-zpvfb_kube-system(ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-zpvfb_kube-system(ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-zpvfb" podUID="ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af" Aug 6 00:18:17.763001 containerd[1508]: time="2024-08-06T00:18:17.762869617Z" level=error msg="Failed to destroy network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.763534 containerd[1508]: time="2024-08-06T00:18:17.763484879Z" level=error msg="encountered an error cleaning up failed sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.763612 containerd[1508]: time="2024-08-06T00:18:17.763558943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc9f459b-fwh5q,Uid:a9dae8f8-5755-4951-8d86-28ab0b866d2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.764508 kubelet[2668]: E0806 00:18:17.764051 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:17.764508 kubelet[2668]: E0806 00:18:17.764124 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79cc9f459b-fwh5q" Aug 6 00:18:17.764508 kubelet[2668]: E0806 00:18:17.764156 2668 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79cc9f459b-fwh5q" Aug 6 00:18:17.764812 kubelet[2668]: E0806 00:18:17.764229 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79cc9f459b-fwh5q_calico-system(a9dae8f8-5755-4951-8d86-28ab0b866d2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79cc9f459b-fwh5q_calico-system(a9dae8f8-5755-4951-8d86-28ab0b866d2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79cc9f459b-fwh5q" podUID="a9dae8f8-5755-4951-8d86-28ab0b866d2e" Aug 6 00:18:18.132159 systemd[1]: Created slice kubepods-besteffort-poda0233e65_9604_4b6a_9cef_2013386fdc9d.slice - libcontainer container kubepods-besteffort-poda0233e65_9604_4b6a_9cef_2013386fdc9d.slice. Aug 6 00:18:18.136001 containerd[1508]: time="2024-08-06T00:18:18.135899201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdjb9,Uid:a0233e65-9604-4b6a-9cef-2013386fdc9d,Namespace:calico-system,Attempt:0,}" Aug 6 00:18:18.205819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70-shm.mount: Deactivated successfully. Aug 6 00:18:18.206001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96-shm.mount: Deactivated successfully. 
Aug 6 00:18:18.242303 containerd[1508]: time="2024-08-06T00:18:18.242184776Z" level=error msg="Failed to destroy network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.244566 containerd[1508]: time="2024-08-06T00:18:18.244432667Z" level=error msg="encountered an error cleaning up failed sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.244566 containerd[1508]: time="2024-08-06T00:18:18.244523104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdjb9,Uid:a0233e65-9604-4b6a-9cef-2013386fdc9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.245471 kubelet[2668]: E0806 00:18:18.244888 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.245471 kubelet[2668]: E0806 00:18:18.244988 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdjb9" Aug 6 00:18:18.245471 kubelet[2668]: E0806 00:18:18.245029 2668 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdjb9" Aug 6 00:18:18.246270 kubelet[2668]: E0806 00:18:18.245117 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tdjb9_calico-system(a0233e65-9604-4b6a-9cef-2013386fdc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tdjb9_calico-system(a0233e65-9604-4b6a-9cef-2013386fdc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tdjb9" 
podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:18.247741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b-shm.mount: Deactivated successfully. Aug 6 00:18:18.305159 kubelet[2668]: I0806 00:18:18.304291 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:18.308100 kubelet[2668]: I0806 00:18:18.307745 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:18.329006 kubelet[2668]: I0806 00:18:18.328820 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:18.329987 containerd[1508]: time="2024-08-06T00:18:18.329688495Z" level=info msg="StopPodSandbox for \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\"" Aug 6 00:18:18.330379 containerd[1508]: time="2024-08-06T00:18:18.330305154Z" level=info msg="StopPodSandbox for \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\"" Aug 6 00:18:18.338015 containerd[1508]: time="2024-08-06T00:18:18.337577493Z" level=info msg="Ensure that sandbox 474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b in task-service has been cleanup successfully" Aug 6 00:18:18.338015 containerd[1508]: time="2024-08-06T00:18:18.337650088Z" level=info msg="StopPodSandbox for \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\"" Aug 6 00:18:18.338452 containerd[1508]: time="2024-08-06T00:18:18.338401063Z" level=info msg="Ensure that sandbox 34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96 in task-service has been cleanup successfully" Aug 6 00:18:18.340258 containerd[1508]: time="2024-08-06T00:18:18.340225466Z" level=info msg="Ensure that sandbox f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70 in task-service has been cleanup successfully" Aug 6 00:18:18.343562 kubelet[2668]: I0806 00:18:18.343365 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:18.346707 containerd[1508]: time="2024-08-06T00:18:18.346512729Z" level=info msg="StopPodSandbox for \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\"" Aug 6 00:18:18.350164 containerd[1508]: time="2024-08-06T00:18:18.350128089Z" level=info msg="Ensure that sandbox 8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa in task-service has been cleanup successfully" Aug 6 00:18:18.430755 containerd[1508]: time="2024-08-06T00:18:18.430405494Z" level=error msg="StopPodSandbox for \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\" failed" error="failed to destroy network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.432361 kubelet[2668]: E0806 00:18:18.431314 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:18.439161 kubelet[2668]: E0806 00:18:18.439115 2668 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70"} Aug 6 00:18:18.440393 kubelet[2668]: E0806 00:18:18.439202 2668 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9dae8f8-5755-4951-8d86-28ab0b866d2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 00:18:18.440393 kubelet[2668]: E0806 00:18:18.439248 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9dae8f8-5755-4951-8d86-28ab0b866d2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79cc9f459b-fwh5q" podUID="a9dae8f8-5755-4951-8d86-28ab0b866d2e" Aug 6 00:18:18.441067 containerd[1508]: time="2024-08-06T00:18:18.441016051Z" level=error msg="StopPodSandbox for \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\" failed" error="failed to destroy network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.441877 kubelet[2668]: E0806 00:18:18.441729 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:18.442266 kubelet[2668]: E0806 00:18:18.442082 2668 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b"} Aug 6 00:18:18.442266 kubelet[2668]: E0806 00:18:18.442159 2668 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0233e65-9604-4b6a-9cef-2013386fdc9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 00:18:18.442266 kubelet[2668]: E0806 00:18:18.442234 2668 
pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0233e65-9604-4b6a-9cef-2013386fdc9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tdjb9" podUID="a0233e65-9604-4b6a-9cef-2013386fdc9d" Aug 6 00:18:18.451573 containerd[1508]: time="2024-08-06T00:18:18.451502021Z" level=error msg="StopPodSandbox for \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\" failed" error="failed to destroy network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.452509 kubelet[2668]: E0806 00:18:18.452226 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:18.452509 kubelet[2668]: E0806 00:18:18.452331 2668 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96"} Aug 6 00:18:18.452509 kubelet[2668]: E0806 00:18:18.452404 2668 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 00:18:18.452509 kubelet[2668]: E0806 00:18:18.452459 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-zpvfb" podUID="ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af" Aug 6 00:18:18.457531 containerd[1508]: time="2024-08-06T00:18:18.457400696Z" level=error msg="StopPodSandbox for \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\" failed" error="failed to destroy network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 6 00:18:18.457765 kubelet[2668]: 
E0806 00:18:18.457718 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:18.457765 kubelet[2668]: E0806 00:18:18.457770 2668 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa"} Aug 6 00:18:18.457914 kubelet[2668]: E0806 00:18:18.457836 2668 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eed81736-ac05-4e75-94df-0684511dda22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 6 00:18:18.457914 kubelet[2668]: E0806 00:18:18.457881 2668 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eed81736-ac05-4e75-94df-0684511dda22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-rxfx5" podUID="eed81736-ac05-4e75-94df-0684511dda22" Aug 6 00:18:20.651666 kubelet[2668]: I0806 00:18:20.649927 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 6 00:18:27.303698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155045418.mount: Deactivated successfully. 
Aug 6 00:18:27.417405 containerd[1508]: time="2024-08-06T00:18:27.416954782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 6 00:18:27.418666 containerd[1508]: time="2024-08-06T00:18:27.411985098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:27.450015 containerd[1508]: time="2024-08-06T00:18:27.449321384Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:27.452629 containerd[1508]: time="2024-08-06T00:18:27.452567260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:27.453808 containerd[1508]: time="2024-08-06T00:18:27.453592238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 10.149514997s" Aug 6 00:18:27.453808 containerd[1508]: time="2024-08-06T00:18:27.453659098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 6 00:18:27.528936 containerd[1508]: time="2024-08-06T00:18:27.528846880Z" level=info msg="CreateContainer within sandbox \"cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 6 00:18:27.570619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603701604.mount: Deactivated successfully. Aug 6 00:18:27.582762 containerd[1508]: time="2024-08-06T00:18:27.582704945Z" level=info msg="CreateContainer within sandbox \"cdb9dc759ac1d75f93b5daabb8932f654119b76d79c010a695072742ae8be2f0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0ad8d4e673a52397b383f83c438b40385c70021993d429f7ea80e548be94017d\"" Aug 6 00:18:27.594624 containerd[1508]: time="2024-08-06T00:18:27.594583267Z" level=info msg="StartContainer for \"0ad8d4e673a52397b383f83c438b40385c70021993d429f7ea80e548be94017d\"" Aug 6 00:18:27.743283 systemd[1]: Started cri-containerd-0ad8d4e673a52397b383f83c438b40385c70021993d429f7ea80e548be94017d.scope - libcontainer container 0ad8d4e673a52397b383f83c438b40385c70021993d429f7ea80e548be94017d. Aug 6 00:18:27.846003 containerd[1508]: time="2024-08-06T00:18:27.845730394Z" level=info msg="StartContainer for \"0ad8d4e673a52397b383f83c438b40385c70021993d429f7ea80e548be94017d\" returns successfully" Aug 6 00:18:27.946278 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 6 00:18:27.948486 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 6 00:18:28.496197 kubelet[2668]: I0806 00:18:28.495325 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-29zdn" podStartSLOduration=1.685257772 podCreationTimestamp="2024-08-06 00:18:04 +0000 UTC" firstStartedPulling="2024-08-06 00:18:04.673238203 +0000 UTC m=+21.795544679" lastFinishedPulling="2024-08-06 00:18:27.454038028 +0000 UTC m=+44.576344517" observedRunningTime="2024-08-06 00:18:28.46311902 +0000 UTC m=+45.585425519" watchObservedRunningTime="2024-08-06 00:18:28.46605761 +0000 UTC m=+45.588364098" Aug 6 00:18:29.125453 containerd[1508]: time="2024-08-06T00:18:29.125034881Z" level=info msg="StopPodSandbox for \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\"" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.209 [INFO][3637] k8s.go 608: Cleaning up netns ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.212 [INFO][3637] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" iface="eth0" netns="/var/run/netns/cni-d326228f-fdad-539b-de36-988442dba5ef" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.213 [INFO][3637] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" iface="eth0" netns="/var/run/netns/cni-d326228f-fdad-539b-de36-988442dba5ef" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.214 [INFO][3637] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" iface="eth0" netns="/var/run/netns/cni-d326228f-fdad-539b-de36-988442dba5ef" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.214 [INFO][3637] k8s.go 615: Releasing IP address(es) ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.214 [INFO][3637] utils.go 188: Calico CNI releasing IP address ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.393 [INFO][3643] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.395 [INFO][3643] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.396 [INFO][3643] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.409 [WARNING][3643] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.409 [INFO][3643] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.411 [INFO][3643] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:29.415586 containerd[1508]: 2024-08-06 00:18:29.413 [INFO][3637] k8s.go 621: Teardown processing complete. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:29.417276 containerd[1508]: time="2024-08-06T00:18:29.416987481Z" level=info msg="TearDown network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\" successfully" Aug 6 00:18:29.417276 containerd[1508]: time="2024-08-06T00:18:29.417041164Z" level=info msg="StopPodSandbox for \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\" returns successfully" Aug 6 00:18:29.418425 containerd[1508]: time="2024-08-06T00:18:29.418148273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rxfx5,Uid:eed81736-ac05-4e75-94df-0684511dda22,Namespace:kube-system,Attempt:1,}" Aug 6 00:18:29.424641 systemd[1]: run-netns-cni\x2dd326228f\x2dfdad\x2d539b\x2dde36\x2d988442dba5ef.mount: Deactivated successfully. Aug 6 00:18:29.922261 systemd-networkd[1432]: cali85a0edb663a: Link UP Aug 6 00:18:29.928024 systemd-networkd[1432]: cali85a0edb663a: Gained carrier Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.615 [INFO][3649] utils.go 100: File /var/lib/calico/mtu does not exist Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.633 [INFO][3649] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0 coredns-5dd5756b68- kube-system eed81736-ac05-4e75-94df-0684511dda22 715 0 2024-08-06 00:17:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-iww3y.gb1.brightbox.com coredns-5dd5756b68-rxfx5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85a0edb663a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.633 [INFO][3649] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.792 [INFO][3683] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" 
HandleID="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.823 [INFO][3683] ipam_plugin.go 264: Auto assigning IP ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" HandleID="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384b20), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-iww3y.gb1.brightbox.com", "pod":"coredns-5dd5756b68-rxfx5", "timestamp":"2024-08-06 00:18:29.792832877 +0000 UTC"}, Hostname:"srv-iww3y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.824 [INFO][3683] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.824 [INFO][3683] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.824 [INFO][3683] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-iww3y.gb1.brightbox.com' Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.836 [INFO][3683] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.848 [INFO][3683] ipam.go 372: Looking up existing affinities for host host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.861 [INFO][3683] ipam.go 489: Trying affinity for 192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.863 [INFO][3683] ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.867 [INFO][3683] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.867 [INFO][3683] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.870 [INFO][3683] ipam.go 1685: Creating new handle: k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29 Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.875 [INFO][3683] ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.881 [INFO][3683] ipam.go 1216: Successfully claimed IPs: [192.168.18.193/26] block=192.168.18.192/26 handle="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.881 [INFO][3683] ipam.go 847: Auto-assigned 1 out 
of 1 IPv4s: [192.168.18.193/26] handle="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.881 [INFO][3683] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:29.934651 containerd[1508]: 2024-08-06 00:18:29.881 [INFO][3683] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.18.193/26] IPv6=[] ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" HandleID="k8s-pod-network.8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.939292 containerd[1508]: 2024-08-06 00:18:29.886 [INFO][3649] k8s.go 386: Populated endpoint ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"eed81736-ac05-4e75-94df-0684511dda22", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"", Pod:"coredns-5dd5756b68-rxfx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85a0edb663a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:29.939292 containerd[1508]: 2024-08-06 00:18:29.887 [INFO][3649] k8s.go 387: Calico CNI using IPs: [192.168.18.193/32] ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.939292 containerd[1508]: 2024-08-06 00:18:29.887 [INFO][3649] dataplane_linux.go 68: Setting the host side veth name to cali85a0edb663a ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.939292 containerd[1508]: 2024-08-06 
00:18:29.902 [INFO][3649] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:29.939292 containerd[1508]: 2024-08-06 00:18:29.903 [INFO][3649] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"eed81736-ac05-4e75-94df-0684511dda22", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29", Pod:"coredns-5dd5756b68-rxfx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85a0edb663a", MAC:"12:3a:e9:4b:20:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:29.939292 containerd[1508]: 2024-08-06 00:18:29.924 [INFO][3649] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29" Namespace="kube-system" Pod="coredns-5dd5756b68-rxfx5" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:30.056531 containerd[1508]: time="2024-08-06T00:18:30.053540907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:30.056531 containerd[1508]: time="2024-08-06T00:18:30.053693258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:30.056531 containerd[1508]: time="2024-08-06T00:18:30.053735822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:30.056531 containerd[1508]: time="2024-08-06T00:18:30.053759963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:30.132309 systemd[1]: Started cri-containerd-8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29.scope - libcontainer container 8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29. Aug 6 00:18:30.265242 containerd[1508]: time="2024-08-06T00:18:30.265153574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rxfx5,Uid:eed81736-ac05-4e75-94df-0684511dda22,Namespace:kube-system,Attempt:1,} returns sandbox id \"8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29\"" Aug 6 00:18:30.274332 containerd[1508]: time="2024-08-06T00:18:30.273254499Z" level=info msg="CreateContainer within sandbox \"8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 00:18:30.309992 containerd[1508]: time="2024-08-06T00:18:30.309753048Z" level=info msg="CreateContainer within sandbox \"8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b1251ad852a9501fc67c59cc7b0c0cdd80007dbc76bf241b136fd9f69b239ab\"" Aug 6 00:18:30.312769 containerd[1508]: time="2024-08-06T00:18:30.312597846Z" level=info msg="StartContainer for \"5b1251ad852a9501fc67c59cc7b0c0cdd80007dbc76bf241b136fd9f69b239ab\"" Aug 6 00:18:30.404329 systemd[1]: Started cri-containerd-5b1251ad852a9501fc67c59cc7b0c0cdd80007dbc76bf241b136fd9f69b239ab.scope - libcontainer container 5b1251ad852a9501fc67c59cc7b0c0cdd80007dbc76bf241b136fd9f69b239ab. Aug 6 00:18:30.492930 containerd[1508]: time="2024-08-06T00:18:30.492695451Z" level=info msg="StartContainer for \"5b1251ad852a9501fc67c59cc7b0c0cdd80007dbc76bf241b136fd9f69b239ab\" returns successfully" Aug 6 00:18:30.897791 systemd-networkd[1432]: vxlan.calico: Link UP Aug 6 00:18:30.897806 systemd-networkd[1432]: vxlan.calico: Gained carrier Aug 6 00:18:31.308321 systemd-networkd[1432]: cali85a0edb663a: Gained IPv6LL Aug 6 00:18:31.487464 kubelet[2668]: I0806 00:18:31.487372 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rxfx5" podStartSLOduration=35.487312358 podCreationTimestamp="2024-08-06 00:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 00:18:31.486642022 +0000 UTC m=+48.608948537" watchObservedRunningTime="2024-08-06 00:18:31.487312358 +0000 UTC m=+48.609618861" Aug 6 00:18:32.124210 containerd[1508]: time="2024-08-06T00:18:32.122990407Z" level=info msg="StopPodSandbox for \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\"" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.197 [INFO][4024] k8s.go 608: Cleaning up netns ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.198 [INFO][4024] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" iface="eth0" netns="/var/run/netns/cni-bd9bfd41-1407-ca8b-57e6-3e6512ee71c2" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.198 [INFO][4024] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" iface="eth0" netns="/var/run/netns/cni-bd9bfd41-1407-ca8b-57e6-3e6512ee71c2" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.198 [INFO][4024] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" iface="eth0" netns="/var/run/netns/cni-bd9bfd41-1407-ca8b-57e6-3e6512ee71c2" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.198 [INFO][4024] k8s.go 615: Releasing IP address(es) ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.199 [INFO][4024] utils.go 188: Calico CNI releasing IP address ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.234 [INFO][4030] ipam_plugin.go 411: Releasing address using handleID ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.235 [INFO][4030] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.235 [INFO][4030] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.243 [WARNING][4030] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.243 [INFO][4030] ipam_plugin.go 439: Releasing address using workloadID ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.246 [INFO][4030] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:32.250521 containerd[1508]: 2024-08-06 00:18:32.248 [INFO][4024] k8s.go 621: Teardown processing complete. ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:32.254533 containerd[1508]: time="2024-08-06T00:18:32.253100043Z" level=info msg="TearDown network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\" successfully" Aug 6 00:18:32.254533 containerd[1508]: time="2024-08-06T00:18:32.253146343Z" level=info msg="StopPodSandbox for \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\" returns successfully" Aug 6 00:18:32.255458 containerd[1508]: time="2024-08-06T00:18:32.254903169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdjb9,Uid:a0233e65-9604-4b6a-9cef-2013386fdc9d,Namespace:calico-system,Attempt:1,}" Aug 6 00:18:32.255819 systemd[1]: run-netns-cni\x2dbd9bfd41\x2d1407\x2dca8b\x2d57e6\x2d3e6512ee71c2.mount: Deactivated successfully. 
Aug 6 00:18:32.268178 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL Aug 6 00:18:32.444950 systemd-networkd[1432]: cali8a679b92145: Link UP Aug 6 00:18:32.446947 systemd-networkd[1432]: cali8a679b92145: Gained carrier Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.325 [INFO][4040] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0 csi-node-driver- calico-system a0233e65-9604-4b6a-9cef-2013386fdc9d 740 0 2024-08-06 00:18:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s srv-iww3y.gb1.brightbox.com csi-node-driver-tdjb9 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8a679b92145 [] []}} ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.325 [INFO][4040] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.373 [INFO][4048] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" HandleID="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.386 [INFO][4048] ipam_plugin.go 264: Auto assigning IP ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" HandleID="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-iww3y.gb1.brightbox.com", "pod":"csi-node-driver-tdjb9", "timestamp":"2024-08-06 00:18:32.373258274 +0000 UTC"}, Hostname:"srv-iww3y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.386 [INFO][4048] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.386 [INFO][4048] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.386 [INFO][4048] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-iww3y.gb1.brightbox.com' Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.389 [INFO][4048] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.395 [INFO][4048] ipam.go 372: Looking up existing affinities for host host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.403 [INFO][4048] ipam.go 489: Trying affinity for 192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.406 [INFO][4048] ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.410 [INFO][4048] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.410 [INFO][4048] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.419 [INFO][4048] ipam.go 1685: Creating new handle: k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79 Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.427 [INFO][4048] ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.436 [INFO][4048] ipam.go 1216: Successfully claimed IPs: [192.168.18.194/26] block=192.168.18.192/26 handle="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.437 [INFO][4048] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.194/26] handle="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.437 [INFO][4048] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 6 00:18:32.488342 containerd[1508]: 2024-08-06 00:18:32.437 [INFO][4048] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.18.194/26] IPv6=[] ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" HandleID="k8s-pod-network.7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.490390 containerd[1508]: 2024-08-06 00:18:32.440 [INFO][4040] k8s.go 386: Populated endpoint ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0233e65-9604-4b6a-9cef-2013386fdc9d", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-tdjb9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8a679b92145", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:32.490390 containerd[1508]: 2024-08-06 00:18:32.440 [INFO][4040] k8s.go 387: Calico CNI using IPs: [192.168.18.194/32] ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.490390 containerd[1508]: 2024-08-06 00:18:32.440 [INFO][4040] dataplane_linux.go 68: Setting the host side veth name to cali8a679b92145 ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.490390 containerd[1508]: 2024-08-06 00:18:32.447 [INFO][4040] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.490390 containerd[1508]: 2024-08-06 00:18:32.447 [INFO][4040] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" 
WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0233e65-9604-4b6a-9cef-2013386fdc9d", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79", Pod:"csi-node-driver-tdjb9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8a679b92145", MAC:"1a:96:9f:f3:e6:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:32.490390 containerd[1508]: 2024-08-06 00:18:32.481 [INFO][4040] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79" Namespace="calico-system" Pod="csi-node-driver-tdjb9" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:32.526108 containerd[1508]: time="2024-08-06T00:18:32.525926605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:32.527263 containerd[1508]: time="2024-08-06T00:18:32.526950190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:32.527263 containerd[1508]: time="2024-08-06T00:18:32.527026001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:32.527263 containerd[1508]: time="2024-08-06T00:18:32.527046117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:32.562100 systemd[1]: run-containerd-runc-k8s.io-7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79-runc.i0r0wS.mount: Deactivated successfully. Aug 6 00:18:32.573174 systemd[1]: Started cri-containerd-7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79.scope - libcontainer container 7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79. 
Aug 6 00:18:32.621251 containerd[1508]: time="2024-08-06T00:18:32.621187842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdjb9,Uid:a0233e65-9604-4b6a-9cef-2013386fdc9d,Namespace:calico-system,Attempt:1,} returns sandbox id \"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79\"" Aug 6 00:18:32.629407 containerd[1508]: time="2024-08-06T00:18:32.627468715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 6 00:18:33.137820 containerd[1508]: time="2024-08-06T00:18:33.137766106Z" level=info msg="StopPodSandbox for \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\"" Aug 6 00:18:33.147875 containerd[1508]: time="2024-08-06T00:18:33.146250136Z" level=info msg="StopPodSandbox for \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\"" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.251 [INFO][4133] k8s.go 608: Cleaning up netns ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.251 [INFO][4133] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" iface="eth0" netns="/var/run/netns/cni-ed7f6865-eeb4-a23c-86a2-4bbbeca61eeb" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.253 [INFO][4133] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" iface="eth0" netns="/var/run/netns/cni-ed7f6865-eeb4-a23c-86a2-4bbbeca61eeb" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.254 [INFO][4133] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" iface="eth0" netns="/var/run/netns/cni-ed7f6865-eeb4-a23c-86a2-4bbbeca61eeb" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.254 [INFO][4133] k8s.go 615: Releasing IP address(es) ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.254 [INFO][4133] utils.go 188: Calico CNI releasing IP address ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.310 [INFO][4146] ipam_plugin.go 411: Releasing address using handleID ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.310 [INFO][4146] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.311 [INFO][4146] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.320 [WARNING][4146] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.320 [INFO][4146] ipam_plugin.go 439: Releasing address using workloadID ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.325 [INFO][4146] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:33.330034 containerd[1508]: 2024-08-06 00:18:33.327 [INFO][4133] k8s.go 621: Teardown processing complete. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:33.336078 containerd[1508]: time="2024-08-06T00:18:33.334217960Z" level=info msg="TearDown network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\" successfully" Aug 6 00:18:33.336304 containerd[1508]: time="2024-08-06T00:18:33.336137497Z" level=info msg="StopPodSandbox for \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\" returns successfully" Aug 6 00:18:33.336091 systemd[1]: run-netns-cni\x2ded7f6865\x2deeb4\x2da23c\x2d86a2\x2d4bbbeca61eeb.mount: Deactivated successfully. Aug 6 00:18:33.340059 containerd[1508]: time="2024-08-06T00:18:33.339381761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zpvfb,Uid:ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af,Namespace:kube-system,Attempt:1,}" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.245 [INFO][4132] k8s.go 608: Cleaning up netns ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.245 [INFO][4132] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" iface="eth0" netns="/var/run/netns/cni-ffac58ba-0306-7dc2-b13f-8ae8f8c9edec" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.245 [INFO][4132] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" iface="eth0" netns="/var/run/netns/cni-ffac58ba-0306-7dc2-b13f-8ae8f8c9edec" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.246 [INFO][4132] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" iface="eth0" netns="/var/run/netns/cni-ffac58ba-0306-7dc2-b13f-8ae8f8c9edec" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.246 [INFO][4132] k8s.go 615: Releasing IP address(es) ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.246 [INFO][4132] utils.go 188: Calico CNI releasing IP address ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.313 [INFO][4145] ipam_plugin.go 411: Releasing address using handleID ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.316 [INFO][4145] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.325 [INFO][4145] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.350 [WARNING][4145] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.350 [INFO][4145] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.353 [INFO][4145] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:33.360088 containerd[1508]: 2024-08-06 00:18:33.355 [INFO][4132] k8s.go 621: Teardown processing complete. ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:33.364911 containerd[1508]: time="2024-08-06T00:18:33.360265297Z" level=info msg="TearDown network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\" successfully" Aug 6 00:18:33.364911 containerd[1508]: time="2024-08-06T00:18:33.360300478Z" level=info msg="StopPodSandbox for \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\" returns successfully" Aug 6 00:18:33.364911 containerd[1508]: time="2024-08-06T00:18:33.361199250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc9f459b-fwh5q,Uid:a9dae8f8-5755-4951-8d86-28ab0b866d2e,Namespace:calico-system,Attempt:1,}" Aug 6 00:18:33.369096 systemd[1]: run-netns-cni\x2dffac58ba\x2d0306\x2d7dc2\x2db13f\x2d8ae8f8c9edec.mount: Deactivated successfully. 
Aug 6 00:18:33.609596 systemd-networkd[1432]: cali14443599c9c: Link UP Aug 6 00:18:33.613253 systemd-networkd[1432]: cali14443599c9c: Gained carrier Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.424 [INFO][4158] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0 coredns-5dd5756b68- kube-system ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af 749 0 2024-08-06 00:17:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-iww3y.gb1.brightbox.com coredns-5dd5756b68-zpvfb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali14443599c9c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.425 [INFO][4158] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.524 [INFO][4169] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" HandleID="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.543 [INFO][4169] ipam_plugin.go 264: Auto assigning IP ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" HandleID="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318750), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-iww3y.gb1.brightbox.com", "pod":"coredns-5dd5756b68-zpvfb", "timestamp":"2024-08-06 00:18:33.524084795 +0000 UTC"}, Hostname:"srv-iww3y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.543 [INFO][4169] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.543 [INFO][4169] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.543 [INFO][4169] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-iww3y.gb1.brightbox.com' Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.546 [INFO][4169] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.556 [INFO][4169] ipam.go 372: Looking up existing affinities for host host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.564 [INFO][4169] ipam.go 489: Trying affinity for 192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.567 [INFO][4169] ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.571 [INFO][4169] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.571 [INFO][4169] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.574 [INFO][4169] ipam.go 1685: Creating new handle: k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126 Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.585 [INFO][4169] ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.594 [INFO][4169] ipam.go 1216: Successfully claimed IPs: [192.168.18.195/26] block=192.168.18.192/26 handle="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.594 [INFO][4169] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.195/26] handle="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.594 [INFO][4169] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 6 00:18:33.653448 containerd[1508]: 2024-08-06 00:18:33.594 [INFO][4169] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.18.195/26] IPv6=[] ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" HandleID="k8s-pod-network.1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.656609 containerd[1508]: 2024-08-06 00:18:33.598 [INFO][4158] k8s.go 386: Populated endpoint ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"", Pod:"coredns-5dd5756b68-zpvfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14443599c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:33.656609 containerd[1508]: 2024-08-06 00:18:33.599 [INFO][4158] k8s.go 387: Calico CNI using IPs: [192.168.18.195/32] ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.656609 containerd[1508]: 2024-08-06 00:18:33.599 [INFO][4158] dataplane_linux.go 68: Setting the host side veth name to cali14443599c9c ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.656609 containerd[1508]: 2024-08-06 00:18:33.615 [INFO][4158] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" 
WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.656609 containerd[1508]: 2024-08-06 00:18:33.619 [INFO][4158] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126", Pod:"coredns-5dd5756b68-zpvfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14443599c9c", MAC:"d2:df:d8:08:89:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:33.656609 containerd[1508]: 2024-08-06 00:18:33.648 [INFO][4158] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126" Namespace="kube-system" Pod="coredns-5dd5756b68-zpvfb" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:33.752585 containerd[1508]: time="2024-08-06T00:18:33.752400308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:33.753215 containerd[1508]: time="2024-08-06T00:18:33.752811723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:33.753215 containerd[1508]: time="2024-08-06T00:18:33.752843328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:33.753215 containerd[1508]: time="2024-08-06T00:18:33.752897754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:33.804188 systemd[1]: Started cri-containerd-1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126.scope - libcontainer container 1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126. Aug 6 00:18:33.846108 systemd-networkd[1432]: cali397c4cdd27f: Link UP Aug 6 00:18:33.849499 systemd-networkd[1432]: cali397c4cdd27f: Gained carrier Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.594 [INFO][4176] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0 calico-kube-controllers-79cc9f459b- calico-system a9dae8f8-5755-4951-8d86-28ab0b866d2e 748 0 2024-08-06 00:18:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79cc9f459b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-iww3y.gb1.brightbox.com calico-kube-controllers-79cc9f459b-fwh5q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali397c4cdd27f [] []}} ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.595 [INFO][4176] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.747 [INFO][4190] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" HandleID="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.773 [INFO][4190] ipam_plugin.go 264: Auto assigning IP ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" HandleID="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3520), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-iww3y.gb1.brightbox.com", "pod":"calico-kube-controllers-79cc9f459b-fwh5q", "timestamp":"2024-08-06 00:18:33.747103856 +0000 UTC"}, Hostname:"srv-iww3y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.775 [INFO][4190] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.775 [INFO][4190] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.775 [INFO][4190] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-iww3y.gb1.brightbox.com' Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.779 [INFO][4190] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.789 [INFO][4190] ipam.go 372: Looking up existing affinities for host host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.798 [INFO][4190] ipam.go 489: Trying affinity for 192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.803 [INFO][4190] ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.810 [INFO][4190] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.811 [INFO][4190] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.814 [INFO][4190] ipam.go 1685: Creating new handle: k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.823 [INFO][4190] ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.833 [INFO][4190] ipam.go 1216: Successfully claimed IPs: [192.168.18.196/26] block=192.168.18.192/26 handle="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.834 [INFO][4190] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.196/26] handle="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.834 [INFO][4190] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 6 00:18:33.887333 containerd[1508]: 2024-08-06 00:18:33.834 [INFO][4190] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.18.196/26] IPv6=[] ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" HandleID="k8s-pod-network.5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.892121 containerd[1508]: 2024-08-06 00:18:33.837 [INFO][4176] k8s.go 386: Populated endpoint ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0", GenerateName:"calico-kube-controllers-79cc9f459b-", Namespace:"calico-system", SelfLink:"", UID:"a9dae8f8-5755-4951-8d86-28ab0b866d2e", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc9f459b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-79cc9f459b-fwh5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali397c4cdd27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:33.892121 containerd[1508]: 2024-08-06 00:18:33.838 [INFO][4176] k8s.go 387: Calico CNI using IPs: [192.168.18.196/32] ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.892121 containerd[1508]: 2024-08-06 00:18:33.840 [INFO][4176] dataplane_linux.go 68: Setting the host side veth name to cali397c4cdd27f ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.892121 containerd[1508]: 2024-08-06 00:18:33.850 [INFO][4176] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.892121 containerd[1508]: 2024-08-06 00:18:33.853 
[INFO][4176] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0", GenerateName:"calico-kube-controllers-79cc9f459b-", Namespace:"calico-system", SelfLink:"", UID:"a9dae8f8-5755-4951-8d86-28ab0b866d2e", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc9f459b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad", Pod:"calico-kube-controllers-79cc9f459b-fwh5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali397c4cdd27f", MAC:"0a:a9:1e:13:0e:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:33.892121 containerd[1508]: 2024-08-06 00:18:33.883 [INFO][4176] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad" Namespace="calico-system" Pod="calico-kube-controllers-79cc9f459b-fwh5q" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:33.947305 containerd[1508]: time="2024-08-06T00:18:33.947200607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-zpvfb,Uid:ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af,Namespace:kube-system,Attempt:1,} returns sandbox id \"1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126\"" Aug 6 00:18:33.956761 containerd[1508]: time="2024-08-06T00:18:33.956684560Z" level=info msg="CreateContainer within sandbox \"1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 6 00:18:33.970454 containerd[1508]: time="2024-08-06T00:18:33.970330909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:33.971062 containerd[1508]: time="2024-08-06T00:18:33.970428968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:33.971145 containerd[1508]: time="2024-08-06T00:18:33.971093395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:33.971282 containerd[1508]: time="2024-08-06T00:18:33.971158945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:33.981051 containerd[1508]: time="2024-08-06T00:18:33.980996490Z" level=info msg="CreateContainer within sandbox \"1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f540b78195394e733df83e5ee4624b7abdfa64aef878df75c34ba7437e823b7\"" Aug 6 00:18:33.982117 containerd[1508]: time="2024-08-06T00:18:33.982079457Z" level=info msg="StartContainer for \"2f540b78195394e733df83e5ee4624b7abdfa64aef878df75c34ba7437e823b7\"" Aug 6 00:18:34.011224 systemd[1]: Started cri-containerd-5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad.scope - libcontainer container 5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad. Aug 6 00:18:34.044294 systemd[1]: Started cri-containerd-2f540b78195394e733df83e5ee4624b7abdfa64aef878df75c34ba7437e823b7.scope - libcontainer container 2f540b78195394e733df83e5ee4624b7abdfa64aef878df75c34ba7437e823b7. Aug 6 00:18:34.100502 containerd[1508]: time="2024-08-06T00:18:34.100409724Z" level=info msg="StartContainer for \"2f540b78195394e733df83e5ee4624b7abdfa64aef878df75c34ba7437e823b7\" returns successfully" Aug 6 00:18:34.137453 containerd[1508]: time="2024-08-06T00:18:34.137304661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79cc9f459b-fwh5q,Uid:a9dae8f8-5755-4951-8d86-28ab0b866d2e,Namespace:calico-system,Attempt:1,} returns sandbox id \"5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad\"" Aug 6 00:18:34.316391 systemd-networkd[1432]: cali8a679b92145: Gained IPv6LL Aug 6 00:18:34.534096 kubelet[2668]: I0806 00:18:34.533871 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zpvfb" podStartSLOduration=38.533724458 podCreationTimestamp="2024-08-06 00:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 00:18:34.531009014 +0000 UTC m=+51.653315516" watchObservedRunningTime="2024-08-06 00:18:34.533724458 +0000 UTC m=+51.656030963" Aug 6 00:18:34.599292 containerd[1508]: time="2024-08-06T00:18:34.597814948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:34.601213 containerd[1508]: time="2024-08-06T00:18:34.601135012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 6 00:18:34.602420 containerd[1508]: time="2024-08-06T00:18:34.602369954Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:34.613888 containerd[1508]: time="2024-08-06T00:18:34.613849932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:34.617741 containerd[1508]: time="2024-08-06T00:18:34.617701394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.987914949s" Aug 6 00:18:34.617872 containerd[1508]: time="2024-08-06T00:18:34.617842743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 6 00:18:34.620580 containerd[1508]: time="2024-08-06T00:18:34.620361418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 6 00:18:34.628298 containerd[1508]: time="2024-08-06T00:18:34.628033423Z" level=info msg="CreateContainer within sandbox \"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 6 00:18:34.669929 containerd[1508]: time="2024-08-06T00:18:34.669870065Z" level=info msg="CreateContainer within sandbox \"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4bb9149b315dc3dcd1b45b4e5eba41699733fb27ce4aa4a481d5b238e25f3ca4\"" Aug 6 00:18:34.671048 containerd[1508]: time="2024-08-06T00:18:34.671016451Z" level=info msg="StartContainer for \"4bb9149b315dc3dcd1b45b4e5eba41699733fb27ce4aa4a481d5b238e25f3ca4\"" Aug 6 00:18:34.741311 systemd[1]: Started cri-containerd-4bb9149b315dc3dcd1b45b4e5eba41699733fb27ce4aa4a481d5b238e25f3ca4.scope - libcontainer container 4bb9149b315dc3dcd1b45b4e5eba41699733fb27ce4aa4a481d5b238e25f3ca4. Aug 6 00:18:34.815893 containerd[1508]: time="2024-08-06T00:18:34.815732620Z" level=info msg="StartContainer for \"4bb9149b315dc3dcd1b45b4e5eba41699733fb27ce4aa4a481d5b238e25f3ca4\" returns successfully" Aug 6 00:18:34.829134 systemd-networkd[1432]: cali14443599c9c: Gained IPv6LL Aug 6 00:18:35.468291 systemd-networkd[1432]: cali397c4cdd27f: Gained IPv6LL Aug 6 00:18:38.006643 containerd[1508]: time="2024-08-06T00:18:38.004709061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:38.006643 containerd[1508]: time="2024-08-06T00:18:38.006414538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 6 00:18:38.010226 containerd[1508]: time="2024-08-06T00:18:38.008552897Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:38.011420 containerd[1508]: time="2024-08-06T00:18:38.011372590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:38.013901 containerd[1508]: time="2024-08-06T00:18:38.013853254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.393444857s" Aug 6 00:18:38.014203 containerd[1508]: time="2024-08-06T00:18:38.014163782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" 
returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 6 00:18:38.017986 containerd[1508]: time="2024-08-06T00:18:38.015756595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 6 00:18:38.050364 containerd[1508]: time="2024-08-06T00:18:38.049781257Z" level=info msg="CreateContainer within sandbox \"5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 6 00:18:38.077195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285932625.mount: Deactivated successfully. Aug 6 00:18:38.085296 containerd[1508]: time="2024-08-06T00:18:38.085200091Z" level=info msg="CreateContainer within sandbox \"5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5b9941e88065113fb4d6127438f648f62bffa6804febb4fc55e29636dee305a5\"" Aug 6 00:18:38.088465 containerd[1508]: time="2024-08-06T00:18:38.087184144Z" level=info msg="StartContainer for \"5b9941e88065113fb4d6127438f648f62bffa6804febb4fc55e29636dee305a5\"" Aug 6 00:18:38.161201 systemd[1]: Started cri-containerd-5b9941e88065113fb4d6127438f648f62bffa6804febb4fc55e29636dee305a5.scope - libcontainer container 5b9941e88065113fb4d6127438f648f62bffa6804febb4fc55e29636dee305a5. Aug 6 00:18:38.263540 containerd[1508]: time="2024-08-06T00:18:38.263332934Z" level=info msg="StartContainer for \"5b9941e88065113fb4d6127438f648f62bffa6804febb4fc55e29636dee305a5\" returns successfully" Aug 6 00:18:38.570922 kubelet[2668]: I0806 00:18:38.570714 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79cc9f459b-fwh5q" podStartSLOduration=30.700139765 podCreationTimestamp="2024-08-06 00:18:04 +0000 UTC" firstStartedPulling="2024-08-06 00:18:34.145078761 +0000 UTC m=+51.267385237" lastFinishedPulling="2024-08-06 00:18:38.015573162 +0000 UTC m=+55.137879644" observedRunningTime="2024-08-06 00:18:38.557986956 +0000 UTC m=+55.680293452" watchObservedRunningTime="2024-08-06 00:18:38.570634172 +0000 UTC m=+55.692940648" Aug 6 00:18:39.637902 systemd[1]: run-containerd-runc-k8s.io-5b9941e88065113fb4d6127438f648f62bffa6804febb4fc55e29636dee305a5-runc.GFL3ZM.mount: Deactivated successfully. 
Aug 6 00:18:39.996989 containerd[1508]: time="2024-08-06T00:18:39.994007896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:39.996989 containerd[1508]: time="2024-08-06T00:18:39.995296980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 6 00:18:39.996989 containerd[1508]: time="2024-08-06T00:18:39.996383024Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:40.017064 containerd[1508]: time="2024-08-06T00:18:40.015069221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:40.017064 containerd[1508]: time="2024-08-06T00:18:40.016381015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.000572644s" Aug 6 00:18:40.017064 containerd[1508]: time="2024-08-06T00:18:40.016509216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 6 00:18:40.021107 containerd[1508]: time="2024-08-06T00:18:40.021054524Z" level=info msg="CreateContainer within sandbox \"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 6 00:18:40.057204 containerd[1508]: time="2024-08-06T00:18:40.057139362Z" level=info msg="CreateContainer within sandbox \"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e31d142ea8767f90da5bd89fab54dec3cca5e3a409393e64f009f8a5e5554733\"" Aug 6 00:18:40.058816 containerd[1508]: time="2024-08-06T00:18:40.058777067Z" level=info msg="StartContainer for \"e31d142ea8767f90da5bd89fab54dec3cca5e3a409393e64f009f8a5e5554733\"" Aug 6 00:18:40.158185 systemd[1]: Started cri-containerd-e31d142ea8767f90da5bd89fab54dec3cca5e3a409393e64f009f8a5e5554733.scope - libcontainer container e31d142ea8767f90da5bd89fab54dec3cca5e3a409393e64f009f8a5e5554733. 
Aug 6 00:18:40.262121 containerd[1508]: time="2024-08-06T00:18:40.261667789Z" level=info msg="StartContainer for \"e31d142ea8767f90da5bd89fab54dec3cca5e3a409393e64f009f8a5e5554733\" returns successfully" Aug 6 00:18:40.453109 kubelet[2668]: I0806 00:18:40.452389 2668 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 6 00:18:40.453109 kubelet[2668]: I0806 00:18:40.452465 2668 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 6 00:18:40.575384 kubelet[2668]: I0806 00:18:40.575233 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-tdjb9" podStartSLOduration=29.184003329 podCreationTimestamp="2024-08-06 00:18:04 +0000 UTC" firstStartedPulling="2024-08-06 00:18:32.626812374 +0000 UTC m=+49.749118850" lastFinishedPulling="2024-08-06 00:18:40.017947619 +0000 UTC m=+57.140254100" observedRunningTime="2024-08-06 00:18:40.574849282 +0000 UTC m=+57.697155777" watchObservedRunningTime="2024-08-06 00:18:40.575138579 +0000 UTC m=+57.697445066" Aug 6 00:18:43.112191 containerd[1508]: time="2024-08-06T00:18:43.111381895Z" level=info msg="StopPodSandbox for \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\"" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.196 [WARNING][4516] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126", Pod:"coredns-5dd5756b68-zpvfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14443599c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 
00:18:43.197 [INFO][4516] k8s.go 608: Cleaning up netns ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.197 [INFO][4516] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" iface="eth0" netns="" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.197 [INFO][4516] k8s.go 615: Releasing IP address(es) ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.197 [INFO][4516] utils.go 188: Calico CNI releasing IP address ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.239 [INFO][4525] ipam_plugin.go 411: Releasing address using handleID ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.239 [INFO][4525] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.239 [INFO][4525] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.251 [WARNING][4525] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.251 [INFO][4525] ipam_plugin.go 439: Releasing address using workloadID ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.253 [INFO][4525] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:43.261406 containerd[1508]: 2024-08-06 00:18:43.259 [INFO][4516] k8s.go 621: Teardown processing complete. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.266123 containerd[1508]: time="2024-08-06T00:18:43.261440396Z" level=info msg="TearDown network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\" successfully" Aug 6 00:18:43.266123 containerd[1508]: time="2024-08-06T00:18:43.261478627Z" level=info msg="StopPodSandbox for \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\" returns successfully" Aug 6 00:18:43.281285 containerd[1508]: time="2024-08-06T00:18:43.281222509Z" level=info msg="RemovePodSandbox for \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\"" Aug 6 00:18:43.281980 containerd[1508]: time="2024-08-06T00:18:43.281563755Z" level=info msg="Forcibly stopping sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\"" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.384 [WARNING][4543] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ef6fd30e-55f2-4b6b-a5ca-c27d38eac3af", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"1753b9fe263639783a59ea8c227d19f9d1a53e30145e05a763c8c9e45b61b126", Pod:"coredns-5dd5756b68-zpvfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14443599c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.384 [INFO][4543] k8s.go 608: Cleaning up netns ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.384 [INFO][4543] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" iface="eth0" netns="" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.384 [INFO][4543] k8s.go 615: Releasing IP address(es) ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.384 [INFO][4543] utils.go 188: Calico CNI releasing IP address ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.421 [INFO][4549] ipam_plugin.go 411: Releasing address using handleID ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.421 [INFO][4549] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.421 [INFO][4549] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.431 [WARNING][4549] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.431 [INFO][4549] ipam_plugin.go 439: Releasing address using workloadID ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" HandleID="k8s-pod-network.34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--zpvfb-eth0" Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.434 [INFO][4549] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:43.439110 containerd[1508]: 2024-08-06 00:18:43.436 [INFO][4543] k8s.go 621: Teardown processing complete. ContainerID="34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96" Aug 6 00:18:43.440582 containerd[1508]: time="2024-08-06T00:18:43.440209537Z" level=info msg="TearDown network for sandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\" successfully" Aug 6 00:18:43.449194 containerd[1508]: time="2024-08-06T00:18:43.449086929Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 6 00:18:43.449681 containerd[1508]: time="2024-08-06T00:18:43.449252657Z" level=info msg="RemovePodSandbox \"34ebfb44c36ceb9f510cdd10f01b036ae2598526abcd2ed9f8d660b1604b4a96\" returns successfully" Aug 6 00:18:43.450283 containerd[1508]: time="2024-08-06T00:18:43.450013868Z" level=info msg="StopPodSandbox for \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\"" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.508 [WARNING][4567] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0", GenerateName:"calico-kube-controllers-79cc9f459b-", Namespace:"calico-system", SelfLink:"", UID:"a9dae8f8-5755-4951-8d86-28ab0b866d2e", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc9f459b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad", Pod:"calico-kube-controllers-79cc9f459b-fwh5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali397c4cdd27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.509 [INFO][4567] k8s.go 608: Cleaning up netns ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.509 [INFO][4567] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" iface="eth0" netns="" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.509 [INFO][4567] k8s.go 615: Releasing IP address(es) ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.509 [INFO][4567] utils.go 188: Calico CNI releasing IP address ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.561 [INFO][4580] ipam_plugin.go 411: Releasing address using handleID ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.561 [INFO][4580] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.562 [INFO][4580] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.575 [WARNING][4580] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.575 [INFO][4580] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.578 [INFO][4580] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:43.582483 containerd[1508]: 2024-08-06 00:18:43.580 [INFO][4567] k8s.go 621: Teardown processing complete. ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.583743 containerd[1508]: time="2024-08-06T00:18:43.582540716Z" level=info msg="TearDown network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\" successfully" Aug 6 00:18:43.583743 containerd[1508]: time="2024-08-06T00:18:43.582578651Z" level=info msg="StopPodSandbox for \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\" returns successfully" Aug 6 00:18:43.583743 containerd[1508]: time="2024-08-06T00:18:43.583502720Z" level=info msg="RemovePodSandbox for \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\"" Aug 6 00:18:43.583743 containerd[1508]: time="2024-08-06T00:18:43.583538941Z" level=info msg="Forcibly stopping sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\"" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.641 [WARNING][4614] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0", GenerateName:"calico-kube-controllers-79cc9f459b-", Namespace:"calico-system", SelfLink:"", UID:"a9dae8f8-5755-4951-8d86-28ab0b866d2e", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79cc9f459b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"5cd4d185b599700297ad221948ea4e705851087c5e554b3e6680a4da1c78eaad", Pod:"calico-kube-controllers-79cc9f459b-fwh5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali397c4cdd27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.641 [INFO][4614] k8s.go 608: Cleaning up netns ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.642 [INFO][4614] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" iface="eth0" netns="" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.642 [INFO][4614] k8s.go 615: Releasing IP address(es) ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.642 [INFO][4614] utils.go 188: Calico CNI releasing IP address ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.693 [INFO][4622] ipam_plugin.go 411: Releasing address using handleID ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.693 [INFO][4622] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.693 [INFO][4622] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.702 [WARNING][4622] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.702 [INFO][4622] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" HandleID="k8s-pod-network.f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--kube--controllers--79cc9f459b--fwh5q-eth0" Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.705 [INFO][4622] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:43.711607 containerd[1508]: 2024-08-06 00:18:43.708 [INFO][4614] k8s.go 621: Teardown processing complete. ContainerID="f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70" Aug 6 00:18:43.711607 containerd[1508]: time="2024-08-06T00:18:43.711567655Z" level=info msg="TearDown network for sandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\" successfully" Aug 6 00:18:43.723605 containerd[1508]: time="2024-08-06T00:18:43.723540368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 6 00:18:43.723813 containerd[1508]: time="2024-08-06T00:18:43.723686629Z" level=info msg="RemovePodSandbox \"f84c52352e0c7365d1369d21cd69c3101a6e9c37e4e992a344df384ad6fd8e70\" returns successfully" Aug 6 00:18:43.724734 containerd[1508]: time="2024-08-06T00:18:43.724659229Z" level=info msg="StopPodSandbox for \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\"" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.818 [WARNING][4640] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"eed81736-ac05-4e75-94df-0684511dda22", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29", Pod:"coredns-5dd5756b68-rxfx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85a0edb663a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.819 [INFO][4640] k8s.go 608: Cleaning up netns ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.819 [INFO][4640] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" iface="eth0" netns="" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.819 [INFO][4640] k8s.go 615: Releasing IP address(es) ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.819 [INFO][4640] utils.go 188: Calico CNI releasing IP address ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.857 [INFO][4648] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.857 [INFO][4648] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.857 [INFO][4648] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.866 [WARNING][4648] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.866 [INFO][4648] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.868 [INFO][4648] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:43.871515 containerd[1508]: 2024-08-06 00:18:43.869 [INFO][4640] k8s.go 621: Teardown processing complete. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:43.873681 containerd[1508]: time="2024-08-06T00:18:43.871591098Z" level=info msg="TearDown network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\" successfully" Aug 6 00:18:43.873681 containerd[1508]: time="2024-08-06T00:18:43.871629859Z" level=info msg="StopPodSandbox for \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\" returns successfully" Aug 6 00:18:43.873681 containerd[1508]: time="2024-08-06T00:18:43.872468210Z" level=info msg="RemovePodSandbox for \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\"" Aug 6 00:18:43.873681 containerd[1508]: time="2024-08-06T00:18:43.872566667Z" level=info msg="Forcibly stopping sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\"" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.941 [WARNING][4667] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"eed81736-ac05-4e75-94df-0684511dda22", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"8d2e9dd876b73bb68847dc5aba3e3ec4b1472e0f8ab55f04edfcf2b2fea45b29", Pod:"coredns-5dd5756b68-rxfx5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85a0edb663a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.942 [INFO][4667] k8s.go 608: Cleaning up netns ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.942 [INFO][4667] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" iface="eth0" netns="" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.943 [INFO][4667] k8s.go 615: Releasing IP address(es) ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.943 [INFO][4667] utils.go 188: Calico CNI releasing IP address ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.974 [INFO][4673] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.975 [INFO][4673] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.975 [INFO][4673] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.987 [WARNING][4673] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.987 [INFO][4673] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" HandleID="k8s-pod-network.8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Workload="srv--iww3y.gb1.brightbox.com-k8s-coredns--5dd5756b68--rxfx5-eth0" Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.996 [INFO][4673] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:44.000411 containerd[1508]: 2024-08-06 00:18:43.998 [INFO][4667] k8s.go 621: Teardown processing complete. ContainerID="8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa" Aug 6 00:18:44.001731 containerd[1508]: time="2024-08-06T00:18:44.000466164Z" level=info msg="TearDown network for sandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\" successfully" Aug 6 00:18:44.008494 containerd[1508]: time="2024-08-06T00:18:44.008410300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 6 00:18:44.008664 containerd[1508]: time="2024-08-06T00:18:44.008557379Z" level=info msg="RemovePodSandbox \"8e9d0f9e4ba63d0b2ac367464d576a9d485695e1c9268b11f227c25931fae6fa\" returns successfully" Aug 6 00:18:44.009496 containerd[1508]: time="2024-08-06T00:18:44.009444874Z" level=info msg="StopPodSandbox for \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\"" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.068 [WARNING][4693] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0233e65-9604-4b6a-9cef-2013386fdc9d", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79", Pod:"csi-node-driver-tdjb9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8a679b92145", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.068 [INFO][4693] k8s.go 608: Cleaning up netns ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.068 [INFO][4693] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" iface="eth0" netns="" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.069 [INFO][4693] k8s.go 615: Releasing IP address(es) ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.069 [INFO][4693] utils.go 188: Calico CNI releasing IP address ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.105 [INFO][4699] ipam_plugin.go 411: Releasing address using handleID ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.105 [INFO][4699] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.105 [INFO][4699] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.116 [WARNING][4699] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.116 [INFO][4699] ipam_plugin.go 439: Releasing address using workloadID ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.118 [INFO][4699] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:44.122674 containerd[1508]: 2024-08-06 00:18:44.120 [INFO][4693] k8s.go 621: Teardown processing complete. ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.125200 containerd[1508]: time="2024-08-06T00:18:44.122749309Z" level=info msg="TearDown network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\" successfully" Aug 6 00:18:44.125200 containerd[1508]: time="2024-08-06T00:18:44.122811976Z" level=info msg="StopPodSandbox for \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\" returns successfully" Aug 6 00:18:44.125200 containerd[1508]: time="2024-08-06T00:18:44.123678536Z" level=info msg="RemovePodSandbox for \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\"" Aug 6 00:18:44.125200 containerd[1508]: time="2024-08-06T00:18:44.123718931Z" level=info msg="Forcibly stopping sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\"" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.183 [WARNING][4718] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0233e65-9604-4b6a-9cef-2013386fdc9d", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"7d39a39525f165ffb952425b30b5fe8fb85a5b9fdb3e34222444282618fb8b79", Pod:"csi-node-driver-tdjb9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8a679b92145", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.184 [INFO][4718] k8s.go 608: Cleaning up netns ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.184 [INFO][4718] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" iface="eth0" netns="" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.184 [INFO][4718] k8s.go 615: Releasing IP address(es) ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.184 [INFO][4718] utils.go 188: Calico CNI releasing IP address ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.215 [INFO][4725] ipam_plugin.go 411: Releasing address using handleID ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.215 [INFO][4725] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.215 [INFO][4725] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.225 [WARNING][4725] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.225 [INFO][4725] ipam_plugin.go 439: Releasing address using workloadID ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" HandleID="k8s-pod-network.474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Workload="srv--iww3y.gb1.brightbox.com-k8s-csi--node--driver--tdjb9-eth0" Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.229 [INFO][4725] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 6 00:18:44.234182 containerd[1508]: 2024-08-06 00:18:44.231 [INFO][4718] k8s.go 621: Teardown processing complete. ContainerID="474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b" Aug 6 00:18:44.235648 containerd[1508]: time="2024-08-06T00:18:44.234256295Z" level=info msg="TearDown network for sandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\" successfully" Aug 6 00:18:44.238226 containerd[1508]: time="2024-08-06T00:18:44.238112064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 6 00:18:44.238443 containerd[1508]: time="2024-08-06T00:18:44.238270489Z" level=info msg="RemovePodSandbox \"474ffc6da7419b7a0c0ed7cbf6a1a41e9760185e369eb0d86140f8cfc2acab0b\" returns successfully" Aug 6 00:18:48.615292 kubelet[2668]: I0806 00:18:48.613800 2668 topology_manager.go:215] "Topology Admit Handler" podUID="8000d12b-8c03-49fb-ae7f-7fd5e0388509" podNamespace="calico-apiserver" podName="calico-apiserver-5f545b59fb-66l42" Aug 6 00:18:48.616888 kubelet[2668]: I0806 00:18:48.615481 2668 topology_manager.go:215] "Topology Admit Handler" podUID="2bf0029b-1c3b-4166-8477-f3fb0d5e6189" podNamespace="calico-apiserver" podName="calico-apiserver-5f545b59fb-nw6zc" Aug 6 00:18:48.638489 systemd[1]: Created slice kubepods-besteffort-pod8000d12b_8c03_49fb_ae7f_7fd5e0388509.slice - libcontainer container kubepods-besteffort-pod8000d12b_8c03_49fb_ae7f_7fd5e0388509.slice. Aug 6 00:18:48.684519 systemd[1]: Created slice kubepods-besteffort-pod2bf0029b_1c3b_4166_8477_f3fb0d5e6189.slice - libcontainer container kubepods-besteffort-pod2bf0029b_1c3b_4166_8477_f3fb0d5e6189.slice. 
Aug 6 00:18:48.693760 kubelet[2668]: I0806 00:18:48.693706 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8000d12b-8c03-49fb-ae7f-7fd5e0388509-calico-apiserver-certs\") pod \"calico-apiserver-5f545b59fb-66l42\" (UID: \"8000d12b-8c03-49fb-ae7f-7fd5e0388509\") " pod="calico-apiserver/calico-apiserver-5f545b59fb-66l42" Aug 6 00:18:48.694044 kubelet[2668]: I0806 00:18:48.693790 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgvh\" (UniqueName: \"kubernetes.io/projected/8000d12b-8c03-49fb-ae7f-7fd5e0388509-kube-api-access-hfgvh\") pod \"calico-apiserver-5f545b59fb-66l42\" (UID: \"8000d12b-8c03-49fb-ae7f-7fd5e0388509\") " pod="calico-apiserver/calico-apiserver-5f545b59fb-66l42" Aug 6 00:18:48.694044 kubelet[2668]: I0806 00:18:48.693843 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzmff\" (UniqueName: \"kubernetes.io/projected/2bf0029b-1c3b-4166-8477-f3fb0d5e6189-kube-api-access-tzmff\") pod \"calico-apiserver-5f545b59fb-nw6zc\" (UID: \"2bf0029b-1c3b-4166-8477-f3fb0d5e6189\") " pod="calico-apiserver/calico-apiserver-5f545b59fb-nw6zc" Aug 6 00:18:48.694044 kubelet[2668]: I0806 00:18:48.693918 2668 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2bf0029b-1c3b-4166-8477-f3fb0d5e6189-calico-apiserver-certs\") pod \"calico-apiserver-5f545b59fb-nw6zc\" (UID: \"2bf0029b-1c3b-4166-8477-f3fb0d5e6189\") " pod="calico-apiserver/calico-apiserver-5f545b59fb-nw6zc" Aug 6 00:18:48.794561 kubelet[2668]: E0806 00:18:48.794508 2668 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 6 00:18:48.795253 kubelet[2668]: E0806 00:18:48.795223 2668 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 6 00:18:48.802142 kubelet[2668]: E0806 00:18:48.802066 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8000d12b-8c03-49fb-ae7f-7fd5e0388509-calico-apiserver-certs podName:8000d12b-8c03-49fb-ae7f-7fd5e0388509 nodeName:}" failed. No retries permitted until 2024-08-06 00:18:49.294632729 +0000 UTC m=+66.416939211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8000d12b-8c03-49fb-ae7f-7fd5e0388509-calico-apiserver-certs") pod "calico-apiserver-5f545b59fb-66l42" (UID: "8000d12b-8c03-49fb-ae7f-7fd5e0388509") : secret "calico-apiserver-certs" not found Aug 6 00:18:48.803110 kubelet[2668]: E0806 00:18:48.803051 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bf0029b-1c3b-4166-8477-f3fb0d5e6189-calico-apiserver-certs podName:2bf0029b-1c3b-4166-8477-f3fb0d5e6189 nodeName:}" failed. No retries permitted until 2024-08-06 00:18:49.302914335 +0000 UTC m=+66.425220823 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2bf0029b-1c3b-4166-8477-f3fb0d5e6189-calico-apiserver-certs") pod "calico-apiserver-5f545b59fb-nw6zc" (UID: "2bf0029b-1c3b-4166-8477-f3fb0d5e6189") : secret "calico-apiserver-certs" not found Aug 6 00:18:49.582924 containerd[1508]: time="2024-08-06T00:18:49.582801567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f545b59fb-66l42,Uid:8000d12b-8c03-49fb-ae7f-7fd5e0388509,Namespace:calico-apiserver,Attempt:0,}" Aug 6 00:18:49.590256 containerd[1508]: time="2024-08-06T00:18:49.590152153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f545b59fb-nw6zc,Uid:2bf0029b-1c3b-4166-8477-f3fb0d5e6189,Namespace:calico-apiserver,Attempt:0,}" Aug 6 00:18:49.874714 systemd-networkd[1432]: cali4b40de34052: Link UP Aug 6 00:18:49.879882 systemd-networkd[1432]: cali4b40de34052: Gained carrier Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.715 [INFO][4759] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0 calico-apiserver-5f545b59fb- calico-apiserver 2bf0029b-1c3b-4166-8477-f3fb0d5e6189 870 0 2024-08-06 00:18:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f545b59fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-iww3y.gb1.brightbox.com calico-apiserver-5f545b59fb-nw6zc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b40de34052 [] []}} ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.715 [INFO][4759] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.785 [INFO][4785] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" HandleID="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.805 [INFO][4785] ipam_plugin.go 264: Auto assigning IP ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" HandleID="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036e080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-iww3y.gb1.brightbox.com", "pod":"calico-apiserver-5f545b59fb-nw6zc", "timestamp":"2024-08-06 00:18:49.785070328 +0000 UTC"}, Hostname:"srv-iww3y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.805 [INFO][4785] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.806 [INFO][4785] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.806 [INFO][4785] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-iww3y.gb1.brightbox.com' Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.809 [INFO][4785] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.815 [INFO][4785] ipam.go 372: Looking up existing affinities for host host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.822 [INFO][4785] ipam.go 489: Trying affinity for 192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.825 [INFO][4785] ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.830 [INFO][4785] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.831 [INFO][4785] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.834 [INFO][4785] ipam.go 1685: Creating new handle: k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1 Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.841 [INFO][4785] ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.856 [INFO][4785] ipam.go 1216: Successfully claimed IPs: [192.168.18.197/26] block=192.168.18.192/26 handle="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.857 [INFO][4785] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.197/26] handle="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.857 [INFO][4785] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 6 00:18:49.902534 containerd[1508]: 2024-08-06 00:18:49.857 [INFO][4785] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.18.197/26] IPv6=[] ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" HandleID="k8s-pod-network.5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" Aug 6 00:18:49.914550 containerd[1508]: 2024-08-06 00:18:49.861 [INFO][4759] k8s.go 386: Populated endpoint ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0", GenerateName:"calico-apiserver-5f545b59fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bf0029b-1c3b-4166-8477-f3fb0d5e6189", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f545b59fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-5f545b59fb-nw6zc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b40de34052", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:49.914550 containerd[1508]: 2024-08-06 00:18:49.862 [INFO][4759] k8s.go 387: Calico CNI using IPs: [192.168.18.197/32] ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" Aug 6 00:18:49.914550 containerd[1508]: 2024-08-06 00:18:49.863 [INFO][4759] dataplane_linux.go 68: Setting the host side veth name to cali4b40de34052 ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" Aug 6 00:18:49.914550 containerd[1508]: 2024-08-06 00:18:49.879 [INFO][4759] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" Aug 6 00:18:49.914550 containerd[1508]: 2024-08-06 00:18:49.881 [INFO][4759] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0", GenerateName:"calico-apiserver-5f545b59fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bf0029b-1c3b-4166-8477-f3fb0d5e6189", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f545b59fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1", Pod:"calico-apiserver-5f545b59fb-nw6zc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b40de34052", MAC:"fe:a2:49:3d:b8:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:49.914550 containerd[1508]: 2024-08-06 00:18:49.899 [INFO][4759] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-nw6zc" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--nw6zc-eth0" Aug 6 00:18:49.981073 systemd-networkd[1432]: calife03f02d54a: Link UP Aug 6 00:18:49.982603 systemd-networkd[1432]: calife03f02d54a: Gained carrier Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.709 [INFO][4760] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0 calico-apiserver-5f545b59fb- calico-apiserver 8000d12b-8c03-49fb-ae7f-7fd5e0388509 868 0 2024-08-06 00:18:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f545b59fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-iww3y.gb1.brightbox.com calico-apiserver-5f545b59fb-66l42 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calife03f02d54a [] []}} ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.710 [INFO][4760] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.789 [INFO][4784] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" HandleID="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.807 [INFO][4784] ipam_plugin.go 264: Auto assigning IP ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" HandleID="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000690930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-iww3y.gb1.brightbox.com", "pod":"calico-apiserver-5f545b59fb-66l42", "timestamp":"2024-08-06 00:18:49.789453047 +0000 UTC"}, Hostname:"srv-iww3y.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.807 [INFO][4784] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.857 [INFO][4784] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.857 [INFO][4784] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-iww3y.gb1.brightbox.com' Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.867 [INFO][4784] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.884 [INFO][4784] ipam.go 372: Looking up existing affinities for host host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.905 [INFO][4784] ipam.go 489: Trying affinity for 192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.917 [INFO][4784] ipam.go 155: Attempting to load block cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.924 [INFO][4784] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.192/26 host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.924 [INFO][4784] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.192/26 handle="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.929 [INFO][4784] ipam.go 1685: Creating new handle: k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549 Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.945 [INFO][4784] ipam.go 1203: Writing block in order to claim IPs block=192.168.18.192/26 handle="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.963 [INFO][4784] ipam.go 1216: Successfully claimed IPs: [192.168.18.198/26] block=192.168.18.192/26 handle="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.966 [INFO][4784] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.198/26] handle="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" host="srv-iww3y.gb1.brightbox.com" Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.966 [INFO][4784] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 6 00:18:50.012336 containerd[1508]: 2024-08-06 00:18:49.966 [INFO][4784] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.18.198/26] IPv6=[] ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" HandleID="k8s-pod-network.415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Workload="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" Aug 6 00:18:50.013926 containerd[1508]: 2024-08-06 00:18:49.970 [INFO][4760] k8s.go 386: Populated endpoint ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0", GenerateName:"calico-apiserver-5f545b59fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8000d12b-8c03-49fb-ae7f-7fd5e0388509", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f545b59fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-5f545b59fb-66l42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife03f02d54a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:50.013926 containerd[1508]: 2024-08-06 00:18:49.970 [INFO][4760] k8s.go 387: Calico CNI using IPs: [192.168.18.198/32] ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" Aug 6 00:18:50.013926 containerd[1508]: 2024-08-06 00:18:49.971 [INFO][4760] dataplane_linux.go 68: Setting the host side veth name to calife03f02d54a ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" Aug 6 00:18:50.013926 containerd[1508]: 2024-08-06 00:18:49.984 [INFO][4760] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" Aug 6 00:18:50.013926 containerd[1508]: 2024-08-06 00:18:49.986 [INFO][4760] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0", GenerateName:"calico-apiserver-5f545b59fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8000d12b-8c03-49fb-ae7f-7fd5e0388509", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 6, 0, 18, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f545b59fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-iww3y.gb1.brightbox.com", ContainerID:"415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549", Pod:"calico-apiserver-5f545b59fb-66l42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife03f02d54a", MAC:"6a:46:ee:19:ff:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 6 00:18:50.013926 containerd[1508]: 2024-08-06 00:18:50.006 [INFO][4760] k8s.go 500: Wrote updated endpoint to datastore ContainerID="415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549" Namespace="calico-apiserver" Pod="calico-apiserver-5f545b59fb-66l42" WorkloadEndpoint="srv--iww3y.gb1.brightbox.com-k8s-calico--apiserver--5f545b59fb--66l42-eth0" Aug 6 00:18:50.017949 containerd[1508]: time="2024-08-06T00:18:50.016148692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:50.018119 containerd[1508]: time="2024-08-06T00:18:50.017117020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:50.018881 containerd[1508]: time="2024-08-06T00:18:50.018810229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:50.020309 containerd[1508]: time="2024-08-06T00:18:50.018847404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:50.090032 systemd[1]: run-containerd-runc-k8s.io-5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1-runc.d9us0a.mount: Deactivated successfully. Aug 6 00:18:50.102189 systemd[1]: Started cri-containerd-5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1.scope - libcontainer container 5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1. Aug 6 00:18:50.117381 containerd[1508]: time="2024-08-06T00:18:50.117086398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 6 00:18:50.117381 containerd[1508]: time="2024-08-06T00:18:50.117173268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:50.118108 containerd[1508]: time="2024-08-06T00:18:50.117205555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 6 00:18:50.118108 containerd[1508]: time="2024-08-06T00:18:50.117227619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 6 00:18:50.165151 systemd[1]: Started cri-containerd-415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549.scope - libcontainer container 415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549. Aug 6 00:18:50.266292 containerd[1508]: time="2024-08-06T00:18:50.266238367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f545b59fb-nw6zc,Uid:2bf0029b-1c3b-4166-8477-f3fb0d5e6189,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1\"" Aug 6 00:18:50.277019 containerd[1508]: time="2024-08-06T00:18:50.276385565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 6 00:18:50.292815 containerd[1508]: time="2024-08-06T00:18:50.292569618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f545b59fb-66l42,Uid:8000d12b-8c03-49fb-ae7f-7fd5e0388509,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549\"" Aug 6 00:18:51.084377 systemd-networkd[1432]: calife03f02d54a: Gained IPv6LL Aug 6 00:18:51.340478 systemd-networkd[1432]: cali4b40de34052: Gained IPv6LL Aug 6 00:18:55.023122 containerd[1508]: time="2024-08-06T00:18:55.022705173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:55.024957 containerd[1508]: time="2024-08-06T00:18:55.023689652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Aug 6 00:18:55.028145 containerd[1508]: time="2024-08-06T00:18:55.028006180Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:55.032710 containerd[1508]: time="2024-08-06T00:18:55.032611557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:55.035089 containerd[1508]: time="2024-08-06T00:18:55.033782433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.75733708s" Aug 6 00:18:55.035089 containerd[1508]: time="2024-08-06T00:18:55.033844815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 6 
00:18:55.035318 containerd[1508]: time="2024-08-06T00:18:55.035284558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 6 00:18:55.039038 containerd[1508]: time="2024-08-06T00:18:55.038991115Z" level=info msg="CreateContainer within sandbox \"5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 6 00:18:55.067654 containerd[1508]: time="2024-08-06T00:18:55.067591181Z" level=info msg="CreateContainer within sandbox \"5c4dc4ba80a3942a132113718b41e413b3d37c6a93a74ba46b72cc7d800310a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"97ce1d31644f724b9c3597cc0c7c03d2427d99a72bdef121faea5257ff9d319a\"" Aug 6 00:18:55.073024 containerd[1508]: time="2024-08-06T00:18:55.068449209Z" level=info msg="StartContainer for \"97ce1d31644f724b9c3597cc0c7c03d2427d99a72bdef121faea5257ff9d319a\"" Aug 6 00:18:55.069288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392661505.mount: Deactivated successfully. Aug 6 00:18:55.179245 systemd[1]: Started cri-containerd-97ce1d31644f724b9c3597cc0c7c03d2427d99a72bdef121faea5257ff9d319a.scope - libcontainer container 97ce1d31644f724b9c3597cc0c7c03d2427d99a72bdef121faea5257ff9d319a. Aug 6 00:18:55.277542 containerd[1508]: time="2024-08-06T00:18:55.277350973Z" level=info msg="StartContainer for \"97ce1d31644f724b9c3597cc0c7c03d2427d99a72bdef121faea5257ff9d319a\" returns successfully" Aug 6 00:18:55.437919 containerd[1508]: time="2024-08-06T00:18:55.437853482Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 6 00:18:55.440486 containerd[1508]: time="2024-08-06T00:18:55.440434024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Aug 6 00:18:55.443941 containerd[1508]: time="2024-08-06T00:18:55.443864574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 408.529285ms" Aug 6 00:18:55.444130 containerd[1508]: time="2024-08-06T00:18:55.443942477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 6 00:18:55.448179 containerd[1508]: time="2024-08-06T00:18:55.448106644Z" level=info msg="CreateContainer within sandbox \"415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 6 00:18:55.478076 containerd[1508]: time="2024-08-06T00:18:55.477857137Z" level=info msg="CreateContainer within sandbox \"415e0336683608787f0f8ad247332302f0e6f7b24138279c4e228fd918191549\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee47f0c0e5620c5e5fa6c138390fca7997178e762ab147aa67150c5c2b428cc4\"" Aug 6 00:18:55.479941 containerd[1508]: time="2024-08-06T00:18:55.478826757Z" level=info msg="StartContainer for \"ee47f0c0e5620c5e5fa6c138390fca7997178e762ab147aa67150c5c2b428cc4\"" Aug 6 00:18:55.525154 systemd[1]: Started cri-containerd-ee47f0c0e5620c5e5fa6c138390fca7997178e762ab147aa67150c5c2b428cc4.scope - libcontainer container 
ee47f0c0e5620c5e5fa6c138390fca7997178e762ab147aa67150c5c2b428cc4. Aug 6 00:18:55.604651 containerd[1508]: time="2024-08-06T00:18:55.604443731Z" level=info msg="StartContainer for \"ee47f0c0e5620c5e5fa6c138390fca7997178e762ab147aa67150c5c2b428cc4\" returns successfully" Aug 6 00:18:55.686340 kubelet[2668]: I0806 00:18:55.685839 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f545b59fb-66l42" podStartSLOduration=2.539264316 podCreationTimestamp="2024-08-06 00:18:48 +0000 UTC" firstStartedPulling="2024-08-06 00:18:50.29797016 +0000 UTC m=+67.420276647" lastFinishedPulling="2024-08-06 00:18:55.444422214 +0000 UTC m=+72.566728701" observedRunningTime="2024-08-06 00:18:55.660954462 +0000 UTC m=+72.783260959" watchObservedRunningTime="2024-08-06 00:18:55.68571637 +0000 UTC m=+72.808022860" Aug 6 00:18:55.686340 kubelet[2668]: I0806 00:18:55.686138 2668 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f545b59fb-nw6zc" podStartSLOduration=2.926024031 podCreationTimestamp="2024-08-06 00:18:48 +0000 UTC" firstStartedPulling="2024-08-06 00:18:50.274486743 +0000 UTC m=+67.396793224" lastFinishedPulling="2024-08-06 00:18:55.034570674 +0000 UTC m=+72.156877155" observedRunningTime="2024-08-06 00:18:55.685026183 +0000 UTC m=+72.807332684" watchObservedRunningTime="2024-08-06 00:18:55.686107962 +0000 UTC m=+72.808414452" Aug 6 00:18:58.511489 systemd[1]: Started sshd@9-10.244.27.62:22-139.178.89.65:51032.service - OpenSSH per-connection server daemon (139.178.89.65:51032). Aug 6 00:18:59.476008 sshd[5014]: Accepted publickey for core from 139.178.89.65 port 51032 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:18:59.479735 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:18:59.489306 systemd-logind[1485]: New session 12 of user core. Aug 6 00:18:59.497218 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 6 00:19:00.628518 sshd[5014]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:00.636503 systemd[1]: sshd@9-10.244.27.62:22-139.178.89.65:51032.service: Deactivated successfully. Aug 6 00:19:00.639277 systemd[1]: session-12.scope: Deactivated successfully. Aug 6 00:19:00.640511 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Aug 6 00:19:00.642674 systemd-logind[1485]: Removed session 12. Aug 6 00:19:05.791143 systemd[1]: Started sshd@10-10.244.27.62:22-139.178.89.65:59086.service - OpenSSH per-connection server daemon (139.178.89.65:59086). Aug 6 00:19:06.684230 sshd[5038]: Accepted publickey for core from 139.178.89.65 port 59086 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:06.691499 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:06.711845 systemd-logind[1485]: New session 13 of user core. Aug 6 00:19:06.719559 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 6 00:19:07.670784 sshd[5038]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:07.677174 systemd[1]: sshd@10-10.244.27.62:22-139.178.89.65:59086.service: Deactivated successfully. Aug 6 00:19:07.680884 systemd[1]: session-13.scope: Deactivated successfully. Aug 6 00:19:07.684129 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Aug 6 00:19:07.685667 systemd-logind[1485]: Removed session 13. 
Aug 6 00:19:12.835463 systemd[1]: Started sshd@11-10.244.27.62:22-139.178.89.65:59068.service - OpenSSH per-connection server daemon (139.178.89.65:59068). Aug 6 00:19:13.660535 systemd[1]: run-containerd-runc-k8s.io-0ad8d4e673a52397b383f83c438b40385c70021993d429f7ea80e548be94017d-runc.on67tV.mount: Deactivated successfully. Aug 6 00:19:13.758013 sshd[5061]: Accepted publickey for core from 139.178.89.65 port 59068 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:13.766076 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:13.783768 systemd-logind[1485]: New session 14 of user core. Aug 6 00:19:13.788257 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 6 00:19:14.514865 sshd[5061]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:14.525649 systemd[1]: sshd@11-10.244.27.62:22-139.178.89.65:59068.service: Deactivated successfully. Aug 6 00:19:14.528446 systemd[1]: session-14.scope: Deactivated successfully. Aug 6 00:19:14.529827 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Aug 6 00:19:14.531634 systemd-logind[1485]: Removed session 14. Aug 6 00:19:14.672526 systemd[1]: Started sshd@12-10.244.27.62:22-139.178.89.65:59072.service - OpenSSH per-connection server daemon (139.178.89.65:59072). Aug 6 00:19:15.582016 sshd[5101]: Accepted publickey for core from 139.178.89.65 port 59072 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:15.583856 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:15.592339 systemd-logind[1485]: New session 15 of user core. Aug 6 00:19:15.599265 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 6 00:19:17.467786 sshd[5101]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:17.478297 systemd[1]: sshd@12-10.244.27.62:22-139.178.89.65:59072.service: Deactivated successfully. Aug 6 00:19:17.481757 systemd[1]: session-15.scope: Deactivated successfully. Aug 6 00:19:17.484476 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Aug 6 00:19:17.487817 systemd-logind[1485]: Removed session 15. Aug 6 00:19:17.619353 systemd[1]: Started sshd@13-10.244.27.62:22-139.178.89.65:59088.service - OpenSSH per-connection server daemon (139.178.89.65:59088). Aug 6 00:19:18.514159 sshd[5131]: Accepted publickey for core from 139.178.89.65 port 59088 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:18.518160 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:18.526191 systemd-logind[1485]: New session 16 of user core. Aug 6 00:19:18.532335 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 6 00:19:19.344821 sshd[5131]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:19.352532 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Aug 6 00:19:19.354374 systemd[1]: sshd@13-10.244.27.62:22-139.178.89.65:59088.service: Deactivated successfully. Aug 6 00:19:19.358703 systemd[1]: session-16.scope: Deactivated successfully. Aug 6 00:19:19.362707 systemd-logind[1485]: Removed session 16. Aug 6 00:19:24.506426 systemd[1]: Started sshd@14-10.244.27.62:22-139.178.89.65:54648.service - OpenSSH per-connection server daemon (139.178.89.65:54648). 
Aug 6 00:19:25.387019 sshd[5155]: Accepted publickey for core from 139.178.89.65 port 54648 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:25.389115 sshd[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:25.398573 systemd-logind[1485]: New session 17 of user core. Aug 6 00:19:25.405271 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 6 00:19:26.122433 sshd[5155]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:26.131120 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Aug 6 00:19:26.131539 systemd[1]: sshd@14-10.244.27.62:22-139.178.89.65:54648.service: Deactivated successfully. Aug 6 00:19:26.136624 systemd[1]: session-17.scope: Deactivated successfully. Aug 6 00:19:26.140032 systemd-logind[1485]: Removed session 17. Aug 6 00:19:31.283297 systemd[1]: Started sshd@15-10.244.27.62:22-139.178.89.65:55388.service - OpenSSH per-connection server daemon (139.178.89.65:55388). Aug 6 00:19:32.159321 sshd[5173]: Accepted publickey for core from 139.178.89.65 port 55388 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:32.161670 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:32.169333 systemd-logind[1485]: New session 18 of user core. Aug 6 00:19:32.177206 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 6 00:19:32.867076 sshd[5173]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:32.871778 systemd[1]: sshd@15-10.244.27.62:22-139.178.89.65:55388.service: Deactivated successfully. Aug 6 00:19:32.874943 systemd[1]: session-18.scope: Deactivated successfully. Aug 6 00:19:32.876777 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Aug 6 00:19:32.878278 systemd-logind[1485]: Removed session 18. Aug 6 00:19:38.031711 systemd[1]: Started sshd@16-10.244.27.62:22-139.178.89.65:55398.service - OpenSSH per-connection server daemon (139.178.89.65:55398). Aug 6 00:19:38.974606 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 55398 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:38.980254 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:38.989798 systemd-logind[1485]: New session 19 of user core. Aug 6 00:19:38.996456 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 6 00:19:39.817907 sshd[5212]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:39.827315 systemd[1]: sshd@16-10.244.27.62:22-139.178.89.65:55398.service: Deactivated successfully. Aug 6 00:19:39.833701 systemd[1]: session-19.scope: Deactivated successfully. Aug 6 00:19:39.837773 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Aug 6 00:19:39.841057 systemd-logind[1485]: Removed session 19. Aug 6 00:19:39.981480 systemd[1]: Started sshd@17-10.244.27.62:22-139.178.89.65:55408.service - OpenSSH per-connection server daemon (139.178.89.65:55408). Aug 6 00:19:40.863336 sshd[5225]: Accepted publickey for core from 139.178.89.65 port 55408 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:40.869818 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:40.879172 systemd-logind[1485]: New session 20 of user core. Aug 6 00:19:40.886218 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 6 00:19:41.941264 sshd[5225]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:41.949077 systemd[1]: sshd@17-10.244.27.62:22-139.178.89.65:55408.service: Deactivated successfully. Aug 6 00:19:41.953802 systemd[1]: session-20.scope: Deactivated successfully. Aug 6 00:19:41.955531 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Aug 6 00:19:41.957184 systemd-logind[1485]: Removed session 20. Aug 6 00:19:42.105717 systemd[1]: Started sshd@18-10.244.27.62:22-139.178.89.65:38122.service - OpenSSH per-connection server daemon (139.178.89.65:38122). Aug 6 00:19:43.021737 sshd[5236]: Accepted publickey for core from 139.178.89.65 port 38122 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:43.023349 sshd[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:43.031488 systemd-logind[1485]: New session 21 of user core. Aug 6 00:19:43.037311 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 6 00:19:45.170992 sshd[5236]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:45.186689 systemd[1]: sshd@18-10.244.27.62:22-139.178.89.65:38122.service: Deactivated successfully. Aug 6 00:19:45.191298 systemd[1]: session-21.scope: Deactivated successfully. Aug 6 00:19:45.193525 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. Aug 6 00:19:45.195587 systemd-logind[1485]: Removed session 21. Aug 6 00:19:45.313796 systemd[1]: Started sshd@19-10.244.27.62:22-139.178.89.65:38128.service - OpenSSH per-connection server daemon (139.178.89.65:38128). Aug 6 00:19:46.255951 sshd[5280]: Accepted publickey for core from 139.178.89.65 port 38128 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:46.259415 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:46.267809 systemd-logind[1485]: New session 22 of user core. Aug 6 00:19:46.273179 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 6 00:19:47.536648 sshd[5280]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:47.545949 systemd[1]: sshd@19-10.244.27.62:22-139.178.89.65:38128.service: Deactivated successfully. Aug 6 00:19:47.551088 systemd[1]: session-22.scope: Deactivated successfully. Aug 6 00:19:47.552578 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Aug 6 00:19:47.555332 systemd-logind[1485]: Removed session 22. Aug 6 00:19:47.694329 systemd[1]: Started sshd@20-10.244.27.62:22-139.178.89.65:38136.service - OpenSSH per-connection server daemon (139.178.89.65:38136). Aug 6 00:19:48.611216 sshd[5316]: Accepted publickey for core from 139.178.89.65 port 38136 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:48.613370 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:48.620737 systemd-logind[1485]: New session 23 of user core. Aug 6 00:19:48.629211 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 6 00:19:49.310252 sshd[5316]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:49.317055 systemd[1]: sshd@20-10.244.27.62:22-139.178.89.65:38136.service: Deactivated successfully. Aug 6 00:19:49.320365 systemd[1]: session-23.scope: Deactivated successfully. Aug 6 00:19:49.321703 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit. Aug 6 00:19:49.323883 systemd-logind[1485]: Removed session 23. 
Aug 6 00:19:54.467375 systemd[1]: Started sshd@21-10.244.27.62:22-139.178.89.65:44756.service - OpenSSH per-connection server daemon (139.178.89.65:44756). Aug 6 00:19:55.341083 sshd[5337]: Accepted publickey for core from 139.178.89.65 port 44756 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:19:55.343365 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:19:55.353747 systemd-logind[1485]: New session 24 of user core. Aug 6 00:19:55.363354 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 6 00:19:56.031414 sshd[5337]: pam_unix(sshd:session): session closed for user core Aug 6 00:19:56.035680 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit. Aug 6 00:19:56.038818 systemd[1]: sshd@21-10.244.27.62:22-139.178.89.65:44756.service: Deactivated successfully. Aug 6 00:19:56.043485 systemd[1]: session-24.scope: Deactivated successfully. Aug 6 00:19:56.045547 systemd-logind[1485]: Removed session 24. Aug 6 00:20:01.195022 systemd[1]: Started sshd@22-10.244.27.62:22-139.178.89.65:57864.service - OpenSSH per-connection server daemon (139.178.89.65:57864). Aug 6 00:20:02.078724 sshd[5357]: Accepted publickey for core from 139.178.89.65 port 57864 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:20:02.081538 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:20:02.089059 systemd-logind[1485]: New session 25 of user core. Aug 6 00:20:02.099318 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 6 00:20:02.796053 sshd[5357]: pam_unix(sshd:session): session closed for user core Aug 6 00:20:02.801985 systemd[1]: sshd@22-10.244.27.62:22-139.178.89.65:57864.service: Deactivated successfully. Aug 6 00:20:02.805242 systemd[1]: session-25.scope: Deactivated successfully. Aug 6 00:20:02.807860 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit. Aug 6 00:20:02.809295 systemd-logind[1485]: Removed session 25. Aug 6 00:20:07.959458 systemd[1]: Started sshd@23-10.244.27.62:22-139.178.89.65:57876.service - OpenSSH per-connection server daemon (139.178.89.65:57876). Aug 6 00:20:08.863111 sshd[5386]: Accepted publickey for core from 139.178.89.65 port 57876 ssh2: RSA SHA256:MLhCj2QQz+1ufXK8br7fHkzSPc3j4VOop8SP6hp3dC8 Aug 6 00:20:08.866171 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 6 00:20:08.876080 systemd-logind[1485]: New session 26 of user core. Aug 6 00:20:08.884381 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 6 00:20:09.608500 sshd[5386]: pam_unix(sshd:session): session closed for user core Aug 6 00:20:09.614463 systemd[1]: sshd@23-10.244.27.62:22-139.178.89.65:57876.service: Deactivated successfully. Aug 6 00:20:09.617378 systemd[1]: session-26.scope: Deactivated successfully. Aug 6 00:20:09.619118 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit. Aug 6 00:20:09.620808 systemd-logind[1485]: Removed session 26.