Dec 13 05:14:15.047467 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 05:14:15.047502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 05:14:15.047516 kernel: BIOS-provided physical RAM map: Dec 13 05:14:15.047532 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 05:14:15.047542 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 05:14:15.047552 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 05:14:15.047563 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 13 05:14:15.047574 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 13 05:14:15.047584 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 05:14:15.047595 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 05:14:15.047605 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 05:14:15.047615 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 05:14:15.047631 kernel: NX (Execute Disable) protection: active Dec 13 05:14:15.047641 kernel: APIC: Static calls initialized Dec 13 05:14:15.047670 kernel: SMBIOS 2.8 present. Dec 13 05:14:15.047682 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 13 05:14:15.047693 kernel: Hypervisor detected: KVM Dec 13 05:14:15.047711 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 05:14:15.047723 kernel: kvm-clock: using sched offset of 4408171891 cycles Dec 13 05:14:15.047735 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 05:14:15.047747 kernel: tsc: Detected 2499.998 MHz processor Dec 13 05:14:15.047758 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 05:14:15.047770 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 05:14:15.047781 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 13 05:14:15.047793 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 05:14:15.047804 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 05:14:15.047820 kernel: Using GB pages for direct mapping Dec 13 05:14:15.047844 kernel: ACPI: Early table checksum verification disabled Dec 13 05:14:15.047856 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 13 05:14:15.047867 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:14:15.047878 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:14:15.047902 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:14:15.047913 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 13 05:14:15.047924 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:14:15.047936 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Dec 13 05:14:15.047952 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:14:15.047963 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 05:14:15.047975 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 13 05:14:15.047986 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 13 05:14:15.047998 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 13 05:14:15.048016 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 13 05:14:15.048037 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 13 05:14:15.048053 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 13 05:14:15.048066 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 13 05:14:15.048078 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 05:14:15.048090 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 05:14:15.048102 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Dec 13 05:14:15.048114 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Dec 13 05:14:15.048125 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Dec 13 05:14:15.048142 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Dec 13 05:14:15.048154 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Dec 13 05:14:15.048166 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Dec 13 05:14:15.048178 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Dec 13 05:14:15.048202 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Dec 13 05:14:15.048213 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Dec 13 05:14:15.048225 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Dec 13 05:14:15.048237 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Dec 13 05:14:15.048249 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Dec 13 05:14:15.048260 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Dec 13 05:14:15.048278 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Dec 13 05:14:15.048290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 05:14:15.048302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 05:14:15.048314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 13 05:14:15.048326 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Dec 13 05:14:15.048338 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Dec 13 05:14:15.048350 kernel: Zone ranges: Dec 13 05:14:15.048362 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 05:14:15.048374 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 13 05:14:15.048391 kernel: Normal empty Dec 13 05:14:15.048403 kernel: Movable zone start for each node Dec 13 05:14:15.048415 kernel: Early memory node ranges Dec 13 05:14:15.048427 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 05:14:15.048438 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 13 05:14:15.048450 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 13 05:14:15.048468 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 05:14:15.048480 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 05:14:15.048492 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 13 05:14:15.048504 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 05:14:15.048521 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 05:14:15.048533 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Dec 13 05:14:15.048545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 05:14:15.048557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 05:14:15.048569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 05:14:15.048581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 05:14:15.048593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 05:14:15.048604 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 05:14:15.048616 kernel: TSC deadline timer available Dec 13 05:14:15.048633 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Dec 13 05:14:15.049618 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 05:14:15.049638 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 05:14:15.049653 kernel: Booting paravirtualized kernel on KVM Dec 13 05:14:15.049665 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 05:14:15.049693 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 13 05:14:15.049705 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Dec 13 05:14:15.049718 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Dec 13 05:14:15.049730 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 13 05:14:15.049749 kernel: kvm-guest: PV spinlocks enabled Dec 13 05:14:15.049761 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 05:14:15.049775 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 05:14:15.049788 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 05:14:15.049800 kernel: random: crng init done Dec 13 05:14:15.049812 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 05:14:15.049824 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 05:14:15.049836 kernel: Fallback order for Node 0: 0 Dec 13 05:14:15.049854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Dec 13 05:14:15.049878 kernel: Policy zone: DMA32 Dec 13 05:14:15.049889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 05:14:15.049907 kernel: software IO TLB: area num 16. Dec 13 05:14:15.049919 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 194832K reserved, 0K cma-reserved) Dec 13 05:14:15.049942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 13 05:14:15.049953 kernel: Kernel/User page tables isolation: enabled Dec 13 05:14:15.049964 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 05:14:15.049975 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 05:14:15.049991 kernel: Dynamic Preempt: voluntary Dec 13 05:14:15.050002 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 05:14:15.050013 kernel: rcu: RCU event tracing is enabled. 
Dec 13 05:14:15.050025 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 13 05:14:15.050040 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 05:14:15.050063 kernel: Rude variant of Tasks RCU enabled. Dec 13 05:14:15.050078 kernel: Tracing variant of Tasks RCU enabled. Dec 13 05:14:15.050090 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 05:14:15.050102 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 13 05:14:15.050113 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 13 05:14:15.050125 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 05:14:15.050136 kernel: Console: colour VGA+ 80x25 Dec 13 05:14:15.050152 kernel: printk: console [tty0] enabled Dec 13 05:14:15.050177 kernel: printk: console [ttyS0] enabled Dec 13 05:14:15.050202 kernel: ACPI: Core revision 20230628 Dec 13 05:14:15.050215 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 05:14:15.050228 kernel: x2apic enabled Dec 13 05:14:15.050246 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 05:14:15.050259 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 05:14:15.050272 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 13 05:14:15.050285 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 05:14:15.050298 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 05:14:15.050310 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 05:14:15.050323 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 05:14:15.050335 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 05:14:15.050348 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 05:14:15.050365 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 05:14:15.050378 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 05:14:15.050390 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 05:14:15.050403 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 05:14:15.050415 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 05:14:15.050428 kernel: MMIO Stale Data: Unknown: No mitigations Dec 13 05:14:15.050440 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 05:14:15.050453 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 05:14:15.050477 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 05:14:15.050489 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 05:14:15.050500 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 05:14:15.050517 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 05:14:15.050542 kernel: Freeing SMP alternatives memory: 32K Dec 13 05:14:15.050554 kernel: pid_max: default: 32768 minimum: 301 Dec 13 05:14:15.050566 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 05:14:15.050578 kernel: landlock: Up and running. Dec 13 05:14:15.050602 kernel: SELinux: Initializing. 
Dec 13 05:14:15.050615 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 05:14:15.050627 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 05:14:15.050640 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 13 05:14:15.050653 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 05:14:15.050665 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 05:14:15.050706 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 13 05:14:15.050720 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Dec 13 05:14:15.050733 kernel: signal: max sigframe size: 1776 Dec 13 05:14:15.050746 kernel: rcu: Hierarchical SRCU implementation. Dec 13 05:14:15.050759 kernel: rcu: Max phase no-delay instances is 400. Dec 13 05:14:15.050772 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 05:14:15.050784 kernel: smp: Bringing up secondary CPUs ... Dec 13 05:14:15.050797 kernel: smpboot: x86: Booting SMP configuration: Dec 13 05:14:15.050809 kernel: .... node #0, CPUs: #1 Dec 13 05:14:15.050828 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 05:14:15.050841 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 05:14:15.050854 kernel: smpboot: Max logical packages: 16 Dec 13 05:14:15.050866 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 13 05:14:15.050879 kernel: devtmpfs: initialized Dec 13 05:14:15.050891 kernel: x86/mm: Memory block size: 128MB Dec 13 05:14:15.050917 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 05:14:15.050928 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 13 05:14:15.050940 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 05:14:15.050956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 05:14:15.050981 kernel: audit: initializing netlink subsys (disabled) Dec 13 05:14:15.050992 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 05:14:15.051004 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 05:14:15.051015 kernel: audit: type=2000 audit(1734066853.848:1): state=initialized audit_enabled=0 res=1 Dec 13 05:14:15.051026 kernel: cpuidle: using governor menu Dec 13 05:14:15.051038 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 05:14:15.051061 kernel: dca service started, version 1.12.1 Dec 13 05:14:15.051073 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 05:14:15.051090 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 13 05:14:15.051102 kernel: PCI: Using configuration type 1 for base access Dec 13 05:14:15.051114 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 05:14:15.051126 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 05:14:15.051138 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 05:14:15.051150 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 05:14:15.051191 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 05:14:15.051205 kernel: ACPI: Added _OSI(Module Device) Dec 13 05:14:15.051217 kernel: ACPI: Added _OSI(Processor Device) Dec 13 05:14:15.051235 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 05:14:15.051248 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 05:14:15.051261 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 05:14:15.051273 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 05:14:15.051286 kernel: ACPI: Interpreter enabled Dec 13 05:14:15.051299 kernel: ACPI: PM: (supports S0 S5) Dec 13 05:14:15.051311 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 05:14:15.051324 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 05:14:15.051336 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 05:14:15.051353 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 05:14:15.051366 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 05:14:15.051651 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 05:14:15.051860 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 05:14:15.052038 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 05:14:15.052057 kernel: PCI host bridge to bus 0000:00 Dec 13 05:14:15.052258 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 05:14:15.052421 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 05:14:15.052574 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 05:14:15.052746 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 13 05:14:15.052899 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 05:14:15.053070 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 13 05:14:15.053254 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 05:14:15.053451 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 05:14:15.053689 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Dec 13 05:14:15.054491 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Dec 13 05:14:15.054717 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Dec 13 05:14:15.054898 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Dec 13 05:14:15.055079 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 05:14:15.055323 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 05:14:15.055504 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Dec 13 05:14:15.055709 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 05:14:15.055878 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Dec 13 05:14:15.056057 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 05:14:15.056242 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Dec 13 05:14:15.056426 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 
05:14:15.056612 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Dec 13 05:14:15.056887 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 13 05:14:15.057064 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Dec 13 05:14:15.057295 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 05:14:15.057473 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Dec 13 05:14:15.057785 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 05:14:15.057963 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Dec 13 05:14:15.058148 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 05:14:15.058326 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Dec 13 05:14:15.058516 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 05:14:15.058710 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 05:14:15.058879 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Dec 13 05:14:15.059044 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 13 05:14:15.059232 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Dec 13 05:14:15.059453 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 05:14:15.062782 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 05:14:15.062981 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Dec 13 05:14:15.063156 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Dec 13 05:14:15.063362 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 05:14:15.063564 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 05:14:15.065818 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 05:14:15.066041 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Dec 13 05:14:15.066247 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Dec 13 05:14:15.066430 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 05:14:15.066618 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 05:14:15.066859 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Dec 13 05:14:15.067063 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Dec 13 05:14:15.067254 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 05:14:15.067424 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 05:14:15.067603 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 05:14:15.068835 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 05:14:15.069040 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Dec 13 05:14:15.069255 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Dec 13 05:14:15.069436 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 05:14:15.069610 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 05:14:15.070390 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 05:14:15.070579 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Dec 13 05:14:15.071785 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 05:14:15.071960 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 05:14:15.072138 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 05:14:15.072339 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 
05:14:15.072516 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 13 05:14:15.074743 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 05:14:15.074921 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 05:14:15.075086 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 05:14:15.075291 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 05:14:15.075461 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 05:14:15.075668 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 05:14:15.075917 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 05:14:15.076084 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 05:14:15.076266 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 05:14:15.076433 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 05:14:15.076598 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 05:14:15.077813 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 05:14:15.077992 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 05:14:15.078195 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 05:14:15.078369 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 05:14:15.078556 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 05:14:15.079766 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 05:14:15.079945 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 05:14:15.079967 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 05:14:15.079981 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 05:14:15.079994 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 05:14:15.080016 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 05:14:15.080029 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 05:14:15.080042 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 05:14:15.080054 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 05:14:15.080067 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 05:14:15.080080 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 05:14:15.080092 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 05:14:15.080105 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 05:14:15.080118 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 05:14:15.080135 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 05:14:15.080149 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 05:14:15.080162 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 05:14:15.080174 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 05:14:15.080200 kernel: iommu: Default domain type: Translated Dec 13 05:14:15.080214 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 05:14:15.080226 kernel: PCI: Using ACPI for IRQ routing Dec 13 05:14:15.080239 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 05:14:15.080252 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 05:14:15.080270 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 13 05:14:15.080438 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Dec 13 05:14:15.080606 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 05:14:15.081838 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 05:14:15.081864 kernel: vgaarb: loaded Dec 13 05:14:15.081878 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 05:14:15.081891 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 05:14:15.081903 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 05:14:15.081919 kernel: pnp: PnP ACPI init Dec 13 05:14:15.082120 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 05:14:15.082141 kernel: pnp: PnP ACPI: found 5 devices Dec 13 05:14:15.082154 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 05:14:15.082198 kernel: NET: Registered PF_INET protocol family Dec 13 05:14:15.082211 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 05:14:15.082224 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 05:14:15.082238 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 05:14:15.082250 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 05:14:15.082270 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 05:14:15.082283 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 05:14:15.082296 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 05:14:15.082309 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 05:14:15.082322 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 05:14:15.082335 kernel: NET: Registered PF_XDP protocol family Dec 13 05:14:15.082502 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 13 05:14:15.082696 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 05:14:15.082886 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 05:14:15.083038 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 13 05:14:15.083227 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 05:14:15.083396 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 05:14:15.083563 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 05:14:15.086775 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 05:14:15.086959 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 05:14:15.087138 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 05:14:15.087328 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 05:14:15.087497 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Dec 13 05:14:15.087693 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Dec 13 05:14:15.087866 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Dec 13 05:14:15.088039 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Dec 13 05:14:15.088230 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Dec 13 05:14:15.088434 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 05:14:15.088617 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 
05:14:15.089839 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 05:14:15.090027 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 13 05:14:15.090228 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 05:14:15.090396 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 05:14:15.090571 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 05:14:15.091789 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 13 05:14:15.091985 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 05:14:15.092149 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 05:14:15.092339 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 05:14:15.092527 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 13 05:14:15.093717 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 05:14:15.093907 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 05:14:15.094095 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 05:14:15.094290 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 13 05:14:15.094455 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 05:14:15.094621 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 05:14:15.094812 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 05:14:15.095004 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 13 05:14:15.095170 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 05:14:15.095349 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 05:14:15.095541 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 05:14:15.097720 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 13 05:14:15.097891 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 05:14:15.098058 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 05:14:15.098237 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 05:14:15.098404 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 13 05:14:15.098588 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 05:14:15.099803 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 05:14:15.099972 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 05:14:15.100168 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 13 05:14:15.100352 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 05:14:15.100528 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 05:14:15.102704 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 05:14:15.102866 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 05:14:15.103018 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 05:14:15.103222 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 13 05:14:15.103376 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 05:14:15.103535 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 13 05:14:15.103744 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 13 05:14:15.103918 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 13 05:14:15.104097 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Dec 13 05:14:15.104305 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 05:14:15.104508 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 13 05:14:15.106715 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 05:14:15.106921 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 05:14:15.107095 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 13 05:14:15.107285 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 05:14:15.107447 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 05:14:15.107682 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 13 05:14:15.107857 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 05:14:15.108013 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 05:14:15.108219 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 13 05:14:15.108380 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 05:14:15.108537 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 05:14:15.109847 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 13 05:14:15.110037 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 05:14:15.110214 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 05:14:15.110404 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 13 05:14:15.110580 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 05:14:15.110785 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 05:14:15.110981 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 13 05:14:15.111139 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 05:14:15.111316 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 05:14:15.111338 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 05:14:15.111352 kernel: PCI: CLS 0 bytes, default 64 Dec 13 05:14:15.111366 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 05:14:15.111380 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 05:14:15.111393 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 05:14:15.111407 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 05:14:15.111420 kernel: Initialise system trusted keyrings Dec 13 05:14:15.111441 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 05:14:15.111472 kernel: Key type asymmetric registered Dec 13 05:14:15.111485 kernel: Asymmetric key parser 'x509' registered Dec 13 05:14:15.111498 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 05:14:15.111524 kernel: io scheduler mq-deadline registered Dec 13 05:14:15.111537 kernel: io scheduler kyber registered Dec 13 05:14:15.111551 kernel: io scheduler bfq registered Dec 13 05:14:15.111778 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 05:14:15.111949 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 05:14:15.112135 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.112335 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 05:14:15.112502 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Dec 13 05:14:15.112695 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.112893 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 05:14:15.113071 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 05:14:15.113260 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.113448 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 05:14:15.113639 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 05:14:15.113839 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.114043 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 05:14:15.114222 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 05:14:15.114397 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.114581 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 05:14:15.114795 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 05:14:15.114975 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.115151 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 05:14:15.115339 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 05:14:15.115515 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.115745 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 05:14:15.115948 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 05:14:15.116114 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 05:14:15.116136 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 05:14:15.116151 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 05:14:15.116173 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 05:14:15.116207 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 05:14:15.116221 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 05:14:15.116235 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 05:14:15.116248 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 05:14:15.116262 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 05:14:15.116275 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 05:14:15.116465 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 05:14:15.116624 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 05:14:15.116828 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T05:14:14 UTC (1734066854) Dec 13 05:14:15.117001 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 05:14:15.117020 kernel: intel_pstate: CPU model not supported Dec 13 05:14:15.117053 kernel: NET: Registered PF_INET6 protocol family Dec 13 05:14:15.117067 kernel: Segment Routing with IPv6 Dec 13 05:14:15.117081 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 
05:14:15.117094 kernel: NET: Registered PF_PACKET protocol family Dec 13 05:14:15.117108 kernel: Key type dns_resolver registered Dec 13 05:14:15.117126 kernel: IPI shorthand broadcast: enabled Dec 13 05:14:15.117140 kernel: sched_clock: Marking stable (1204015967, 241299193)->(1703569411, -258254251) Dec 13 05:14:15.117153 kernel: registered taskstats version 1 Dec 13 05:14:15.117167 kernel: Loading compiled-in X.509 certificates Dec 13 05:14:15.117192 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 05:14:15.117206 kernel: Key type .fscrypt registered Dec 13 05:14:15.117219 kernel: Key type fscrypt-provisioning registered Dec 13 05:14:15.117233 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 05:14:15.117246 kernel: ima: Allocated hash algorithm: sha1 Dec 13 05:14:15.117266 kernel: ima: No architecture policies found Dec 13 05:14:15.117279 kernel: clk: Disabling unused clocks Dec 13 05:14:15.117292 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 05:14:15.117306 kernel: Write protecting the kernel read-only data: 36864k Dec 13 05:14:15.117319 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 05:14:15.117333 kernel: Run /init as init process Dec 13 05:14:15.117347 kernel: with arguments: Dec 13 05:14:15.117360 kernel: /init Dec 13 05:14:15.117373 kernel: with environment: Dec 13 05:14:15.117391 kernel: HOME=/ Dec 13 05:14:15.117404 kernel: TERM=linux Dec 13 05:14:15.117417 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 05:14:15.117433 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 05:14:15.117450 systemd[1]: Detected virtualization kvm. Dec 13 05:14:15.117468 systemd[1]: Detected architecture x86-64. Dec 13 05:14:15.117481 systemd[1]: Running in initrd. Dec 13 05:14:15.117495 systemd[1]: No hostname configured, using default hostname. Dec 13 05:14:15.117514 systemd[1]: Hostname set to . Dec 13 05:14:15.117530 systemd[1]: Initializing machine ID from VM UUID. Dec 13 05:14:15.117544 systemd[1]: Queued start job for default target initrd.target. Dec 13 05:14:15.117558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 05:14:15.117573 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:14:15.117587 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 05:14:15.117604 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 05:14:15.117624 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 05:14:15.117639 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 05:14:15.117693 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 05:14:15.117714 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 05:14:15.117729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 05:14:15.117743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:14:15.117757 systemd[1]: Reached target paths.target - Path Units. Dec 13 05:14:15.117778 systemd[1]: Reached target slices.target - Slice Units. Dec 13 05:14:15.117793 systemd[1]: Reached target swap.target - Swaps. Dec 13 05:14:15.117810 systemd[1]: Reached target timers.target - Timer Units. Dec 13 05:14:15.117824 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 05:14:15.117839 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 05:14:15.117865 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 05:14:15.117879 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 05:14:15.117893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:14:15.117906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 05:14:15.117939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:14:15.117953 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 05:14:15.117968 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 05:14:15.117982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 05:14:15.117996 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 05:14:15.118010 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 05:14:15.118025 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 05:14:15.118039 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 05:14:15.118053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:14:15.118112 systemd-journald[201]: Collecting audit messages is disabled. Dec 13 05:14:15.118146 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 05:14:15.118161 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:14:15.118194 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 05:14:15.118216 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 05:14:15.118231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 05:14:15.118245 kernel: Bridge firewalling registered Dec 13 05:14:15.118260 systemd-journald[201]: Journal started Dec 13 05:14:15.118292 systemd-journald[201]: Runtime Journal (/run/log/journal/5a0c5b843e7f44128427be434daab0ec) is 4.7M, max 38.0M, 33.2M free. Dec 13 05:14:15.060454 systemd-modules-load[202]: Inserted module 'overlay' Dec 13 05:14:15.108988 systemd-modules-load[202]: Inserted module 'br_netfilter' Dec 13 05:14:15.176688 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 05:14:15.177354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 05:14:15.179526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:14:15.181524 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 05:14:15.196873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:14:15.199902 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 13 05:14:15.203833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 05:14:15.214802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 05:14:15.220217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:14:15.229769 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:14:15.235634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:14:15.241832 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 05:14:15.243389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:14:15.248839 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 05:14:15.258894 dracut-cmdline[232]: dracut-dracut-053 Dec 13 05:14:15.263940 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 05:14:15.299282 systemd-resolved[234]: Positive Trust Anchors: Dec 13 05:14:15.300275 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 05:14:15.300321 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 05:14:15.309385 systemd-resolved[234]: Defaulting to hostname 'linux'. Dec 13 05:14:15.311210 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 05:14:15.312148 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:14:15.371699 kernel: SCSI subsystem initialized Dec 13 05:14:15.383694 kernel: Loading iSCSI transport class v2.0-870. Dec 13 05:14:15.397741 kernel: iscsi: registered transport (tcp) Dec 13 05:14:15.425248 kernel: iscsi: registered transport (qla4xxx) Dec 13 05:14:15.425307 kernel: QLogic iSCSI HBA Driver Dec 13 05:14:15.480715 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 05:14:15.486833 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 05:14:15.528567 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 05:14:15.528635 kernel: device-mapper: uevent: version 1.0.3 Dec 13 05:14:15.530728 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 05:14:15.579713 kernel: raid6: sse2x4 gen() 12892 MB/s Dec 13 05:14:15.597688 kernel: raid6: sse2x2 gen() 8627 MB/s Dec 13 05:14:15.618037 kernel: raid6: sse2x1 gen() 9019 MB/s Dec 13 05:14:15.618328 kernel: raid6: using algorithm sse2x4 gen() 12892 MB/s Dec 13 05:14:15.638425 kernel: raid6: .... xor() 7579 MB/s, rmw enabled Dec 13 05:14:15.638470 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 05:14:15.665728 kernel: xor: automatically using best checksumming function avx Dec 13 05:14:15.860698 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 05:14:15.878986 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 05:14:15.887975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:14:15.920174 systemd-udevd[417]: Using default interface naming scheme 'v255'. Dec 13 05:14:15.927751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:14:15.936071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 05:14:15.959668 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Dec 13 05:14:16.002221 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 05:14:16.008848 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 05:14:16.124783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:14:16.134308 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 05:14:16.162667 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 05:14:16.165274 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 05:14:16.169199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:14:16.171183 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 05:14:16.179832 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 05:14:16.200871 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 05:14:16.252701 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 13 05:14:16.359708 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 05:14:16.359978 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 05:14:16.359998 kernel: ACPI: bus type USB registered Dec 13 05:14:16.360024 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 05:14:16.360050 kernel: GPT:17805311 != 125829119 Dec 13 05:14:16.360066 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 05:14:16.360094 kernel: GPT:17805311 != 125829119 Dec 13 05:14:16.360109 kernel: usbcore: registered new interface driver usbfs Dec 13 05:14:16.360124 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 05:14:16.360139 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:14:16.360182 kernel: usbcore: registered new interface driver hub Dec 13 05:14:16.360200 kernel: usbcore: registered new device driver usb Dec 13 05:14:16.360217 kernel: AVX version of gcm_enc/dec engaged. 
Dec 13 05:14:16.360241 kernel: AES CTR mode by8 optimization enabled Dec 13 05:14:16.360260 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:14:16.447024 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 05:14:16.447294 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 05:14:16.447509 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 05:14:16.447745 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 05:14:16.447960 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 05:14:16.448176 kernel: hub 1-0:1.0: USB hub found Dec 13 05:14:16.448417 kernel: hub 1-0:1.0: 4 ports detected Dec 13 05:14:16.448623 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 05:14:16.451410 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (477) Dec 13 05:14:16.451456 kernel: hub 2-0:1.0: USB hub found Dec 13 05:14:16.452907 kernel: hub 2-0:1.0: 4 ports detected Dec 13 05:14:16.453160 kernel: libata version 3.00 loaded. Dec 13 05:14:16.453192 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 05:14:16.453400 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 05:14:16.453436 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 05:14:16.453637 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 05:14:16.455796 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463) Dec 13 05:14:16.455819 kernel: scsi host0: ahci Dec 13 05:14:16.456034 kernel: scsi host1: ahci Dec 13 05:14:16.456254 kernel: scsi host2: ahci Dec 13 05:14:16.456452 kernel: scsi host3: ahci Dec 13 05:14:16.457697 kernel: scsi host4: ahci Dec 13 05:14:16.457901 kernel: scsi host5: ahci Dec 13 05:14:16.458112 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 05:14:16.458134 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 05:14:16.458164 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 05:14:16.458192 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 05:14:16.458211 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Dec 13 05:14:16.458228 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 05:14:16.329116 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 05:14:16.329311 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:14:16.331821 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:14:16.332586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 05:14:16.333789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:14:16.335749 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:14:16.345972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:14:16.424893 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 05:14:16.538072 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Dec 13 05:14:16.540035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:14:16.547981 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 05:14:16.562543 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 05:14:16.570010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 05:14:16.576833 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 05:14:16.580831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 05:14:16.584567 disk-uuid[558]: Primary Header is updated. Dec 13 05:14:16.584567 disk-uuid[558]: Secondary Entries is updated. Dec 13 05:14:16.584567 disk-uuid[558]: Secondary Header is updated. Dec 13 05:14:16.590445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:14:16.596686 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:14:16.620722 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:14:16.624682 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 05:14:16.768605 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 05:14:16.768683 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 05:14:16.768704 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 05:14:16.768721 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 05:14:16.772083 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 05:14:16.772140 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 05:14:16.800667 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 05:14:16.808251 kernel: usbcore: registered new interface driver usbhid Dec 13 05:14:16.808287 kernel: usbhid: USB HID core driver Dec 13 05:14:16.816259 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 05:14:16.816298 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 05:14:17.607689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 05:14:17.609189 disk-uuid[559]: The operation has completed successfully. Dec 13 05:14:17.655433 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 05:14:17.655621 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 05:14:17.685889 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 05:14:17.692029 sh[585]: Success Dec 13 05:14:17.709789 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 05:14:17.768525 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 05:14:17.782238 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 05:14:17.786308 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 05:14:17.814722 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 05:14:17.814777 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:14:17.814809 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 05:14:17.814828 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 05:14:17.814846 kernel: BTRFS info (device dm-0): using free space tree Dec 13 05:14:17.824151 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 05:14:17.825813 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 05:14:17.831848 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 05:14:17.834880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 05:14:17.847787 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:14:17.847827 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:14:17.849673 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:14:17.855696 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:14:17.869843 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:14:17.869522 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 05:14:17.877845 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 05:14:17.884973 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 05:14:18.024224 ignition[676]: Ignition 2.19.0 Dec 13 05:14:18.024248 ignition[676]: Stage: fetch-offline Dec 13 05:14:18.024346 ignition[676]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:14:18.024372 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:14:18.024601 ignition[676]: parsed url from cmdline: "" Dec 13 05:14:18.028309 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 05:14:18.024608 ignition[676]: no config URL provided Dec 13 05:14:18.024617 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 05:14:18.024641 ignition[676]: no config at "/usr/lib/ignition/user.ign" Dec 13 05:14:18.024662 ignition[676]: failed to fetch config: resource requires networking Dec 13 05:14:18.025318 ignition[676]: Ignition finished successfully Dec 13 05:14:18.047401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 05:14:18.055876 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 05:14:18.093819 systemd-networkd[773]: lo: Link UP Dec 13 05:14:18.093836 systemd-networkd[773]: lo: Gained carrier Dec 13 05:14:18.096331 systemd-networkd[773]: Enumeration completed Dec 13 05:14:18.096867 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 05:14:18.096984 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:14:18.096990 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 05:14:18.097745 systemd[1]: Reached target network.target - Network. 
Dec 13 05:14:18.098910 systemd-networkd[773]: eth0: Link UP Dec 13 05:14:18.098916 systemd-networkd[773]: eth0: Gained carrier Dec 13 05:14:18.098928 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:14:18.108136 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 05:14:18.124221 systemd-networkd[773]: eth0: DHCPv4 address 10.230.15.106/30, gateway 10.230.15.105 acquired from 10.230.15.105 Dec 13 05:14:18.128628 ignition[775]: Ignition 2.19.0 Dec 13 05:14:18.128672 ignition[775]: Stage: fetch Dec 13 05:14:18.128914 ignition[775]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:14:18.128934 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:14:18.129066 ignition[775]: parsed url from cmdline: "" Dec 13 05:14:18.129080 ignition[775]: no config URL provided Dec 13 05:14:18.129090 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 05:14:18.129123 ignition[775]: no config at "/usr/lib/ignition/user.ign" Dec 13 05:14:18.129315 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 05:14:18.129363 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 05:14:18.129375 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 05:14:18.146781 ignition[775]: GET result: OK Dec 13 05:14:18.147674 ignition[775]: parsing config with SHA512: aa6fff1cc8c68cc32e7da908e8566cae42b48a41b460dda49183cd8e2c1a4734cee5f118302de743ad478441cb2a2c1c5222777e2afbcf699114a87870338f31 Dec 13 05:14:18.153840 unknown[775]: fetched base config from "system" Dec 13 05:14:18.154404 ignition[775]: fetch: fetch complete Dec 13 05:14:18.153867 unknown[775]: fetched base config from "system" Dec 13 05:14:18.154413 ignition[775]: fetch: fetch passed Dec 13 05:14:18.153876 unknown[775]: fetched user config from "openstack" Dec 13 05:14:18.154488 ignition[775]: Ignition finished successfully Dec 13 05:14:18.156566 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 05:14:18.166921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 05:14:18.186865 ignition[783]: Ignition 2.19.0 Dec 13 05:14:18.186905 ignition[783]: Stage: kargs Dec 13 05:14:18.187165 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:14:18.187186 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:14:18.188672 ignition[783]: kargs: kargs passed Dec 13 05:14:18.188764 ignition[783]: Ignition finished successfully Dec 13 05:14:18.192174 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 05:14:18.202913 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 05:14:18.221830 ignition[789]: Ignition 2.19.0 Dec 13 05:14:18.221861 ignition[789]: Stage: disks Dec 13 05:14:18.222107 ignition[789]: no configs at "/usr/lib/ignition/base.d" Dec 13 05:14:18.222131 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:14:18.223741 ignition[789]: disks: disks passed Dec 13 05:14:18.224943 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 05:14:18.223828 ignition[789]: Ignition finished successfully Dec 13 05:14:18.226672 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 05:14:18.228323 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Dec 13 05:14:18.229779 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 05:14:18.231309 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 05:14:18.232910 systemd[1]: Reached target basic.target - Basic System. Dec 13 05:14:18.240851 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 05:14:18.260541 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 05:14:18.264779 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 05:14:18.270768 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 05:14:18.397687 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 05:14:18.398492 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 05:14:18.400772 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 05:14:18.407782 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 05:14:18.410780 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 05:14:18.413618 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 05:14:18.419937 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 13 05:14:18.421299 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 05:14:18.421338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 05:14:18.431047 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805) Dec 13 05:14:18.431077 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:14:18.432122 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 05:14:18.444326 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:14:18.444381 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:14:18.446876 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 05:14:18.453216 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:14:18.459563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 05:14:18.518502 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 05:14:18.529463 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Dec 13 05:14:18.536770 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 05:14:18.544136 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 05:14:18.648070 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 05:14:18.653768 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 05:14:18.655900 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 05:14:18.670676 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:14:18.691924 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 13 05:14:18.702266 ignition[925]: INFO : Ignition 2.19.0 Dec 13 05:14:18.704730 ignition[925]: INFO : Stage: mount Dec 13 05:14:18.704730 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:14:18.704730 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:14:18.707561 ignition[925]: INFO : mount: mount passed Dec 13 05:14:18.707561 ignition[925]: INFO : Ignition finished successfully Dec 13 05:14:18.708618 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 05:14:18.805152 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 05:14:19.180111 systemd-networkd[773]: eth0: Gained IPv6LL Dec 13 05:14:20.686329 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83da:24:19ff:fee6:f6a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83da:24:19ff:fee6:f6a/64 assigned by NDisc. Dec 13 05:14:20.686347 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 05:14:25.583216 coreos-metadata[807]: Dec 13 05:14:25.583 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:14:25.609459 coreos-metadata[807]: Dec 13 05:14:25.609 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 05:14:25.623605 coreos-metadata[807]: Dec 13 05:14:25.623 INFO Fetch successful Dec 13 05:14:25.624620 coreos-metadata[807]: Dec 13 05:14:25.624 INFO wrote hostname srv-p0439.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 05:14:25.626976 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 05:14:25.627155 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 13 05:14:25.637809 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 05:14:25.650931 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 05:14:25.678686 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Dec 13 05:14:25.678759 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 05:14:25.679695 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 05:14:25.682658 kernel: BTRFS info (device vda6): using free space tree Dec 13 05:14:25.687802 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 05:14:25.690123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 05:14:25.726043 ignition[958]: INFO : Ignition 2.19.0 Dec 13 05:14:25.726043 ignition[958]: INFO : Stage: files Dec 13 05:14:25.728060 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:14:25.728060 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:14:25.728060 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Dec 13 05:14:25.730913 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 05:14:25.730913 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 05:14:25.733044 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 05:14:25.734102 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 05:14:25.734102 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 05:14:25.733815 unknown[958]: wrote ssh authorized keys file for user: core Dec 13 05:14:25.737141 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 05:14:25.737141 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 05:14:25.737141 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 05:14:25.737141 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 05:14:25.993754 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 05:14:26.322258 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 05:14:26.323831 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 05:14:26.323831 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 05:14:26.323831 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 05:14:26.323831 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 05:14:26.323831 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 05:14:26.335658 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:14:26.335658 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 05:14:26.881519 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 05:14:28.601764 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 05:14:28.601764 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 05:14:28.604932 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 05:14:28.604932 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 05:14:28.604932 ignition[958]: INFO : files: files passed Dec 13 05:14:28.604932 ignition[958]: INFO : Ignition finished successfully Dec 13 05:14:28.606662 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 05:14:28.619893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 05:14:28.621718 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 05:14:28.631393 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 05:14:28.631560 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 05:14:28.643943 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:14:28.643943 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:14:28.647314 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 05:14:28.648134 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 05:14:28.649940 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 05:14:28.657878 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 05:14:28.691793 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 05:14:28.692013 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 05:14:28.693908 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 05:14:28.695362 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 05:14:28.697055 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 05:14:28.704929 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 05:14:28.724392 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 05:14:28.729899 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 05:14:28.753878 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:14:28.756145 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:14:28.757242 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 05:14:28.758881 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 05:14:28.759150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 05:14:28.760891 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 05:14:28.761978 systemd[1]: Stopped target basic.target - Basic System. Dec 13 05:14:28.763469 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 05:14:28.764921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 05:14:28.766353 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 05:14:28.768095 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 05:14:28.769822 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 05:14:28.771443 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 05:14:28.772961 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 05:14:28.774683 systemd[1]: Stopped target swap.target - Swaps. Dec 13 05:14:28.777474 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 05:14:28.777668 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 05:14:28.779392 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:14:28.780386 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 05:14:28.782142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 05:14:28.782435 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 13 05:14:28.783965 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 05:14:28.784183 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 05:14:28.786124 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 05:14:28.786314 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 05:14:28.788320 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 05:14:28.788505 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 05:14:28.796020 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 05:14:28.796903 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 05:14:28.797141 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:14:28.807286 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 05:14:28.808584 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 05:14:28.810222 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:14:28.813633 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 05:14:28.814067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 05:14:28.826814 ignition[1010]: INFO : Ignition 2.19.0 Dec 13 05:14:28.826814 ignition[1010]: INFO : Stage: umount Dec 13 05:14:28.826814 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 05:14:28.826814 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 05:14:28.836008 ignition[1010]: INFO : umount: umount passed Dec 13 05:14:28.836008 ignition[1010]: INFO : Ignition finished successfully Dec 13 05:14:28.831088 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 05:14:28.831271 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 05:14:28.834411 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 05:14:28.834596 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 05:14:28.840099 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 05:14:28.840177 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 05:14:28.841087 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 05:14:28.841154 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 05:14:28.842529 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 05:14:28.842596 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 05:14:28.844964 systemd[1]: Stopped target network.target - Network. Dec 13 05:14:28.846332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 05:14:28.846405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 05:14:28.847905 systemd[1]: Stopped target paths.target - Path Units. Dec 13 05:14:28.850726 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 05:14:28.854848 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:14:28.855866 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 05:14:28.859539 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 05:14:28.860745 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 05:14:28.860896 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 13 05:14:28.862335 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 05:14:28.862408 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 05:14:28.863993 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 05:14:28.864122 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 05:14:28.865453 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 05:14:28.865520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 05:14:28.867229 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 05:14:28.870629 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 05:14:28.874958 systemd-networkd[773]: eth0: DHCPv6 lease lost Dec 13 05:14:28.880903 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 05:14:28.882080 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 05:14:28.882312 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 05:14:28.888131 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 05:14:28.888294 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 05:14:28.892406 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 05:14:28.893373 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 05:14:28.896342 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 05:14:28.896440 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:14:28.898116 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 05:14:28.898193 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 05:14:28.903771 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 05:14:28.905383 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 05:14:28.905456 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 05:14:28.908260 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 05:14:28.908345 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:14:28.909140 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 05:14:28.909215 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 05:14:28.910847 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 05:14:28.910913 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:14:28.916937 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:14:28.929188 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 05:14:28.930359 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:14:28.931873 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 05:14:28.932011 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 05:14:28.934728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 05:14:28.934930 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 05:14:28.936250 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 05:14:28.936309 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 05:14:28.937920 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 05:14:28.937993 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 05:14:28.940260 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 05:14:28.940330 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 05:14:28.941866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 05:14:28.941946 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 05:14:28.954959 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 05:14:28.956696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 05:14:28.956777 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:14:28.958767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 05:14:28.958865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:14:28.963732 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 05:14:28.963902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 05:14:28.965896 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 05:14:28.977275 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 05:14:28.987311 systemd[1]: Switching root. Dec 13 05:14:29.022921 systemd-journald[201]: Journal stopped Dec 13 05:14:30.451298 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Dec 13 05:14:30.451398 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 05:14:30.451447 kernel: SELinux: policy capability open_perms=1 Dec 13 05:14:30.451475 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 05:14:30.451493 kernel: SELinux: policy capability always_check_network=0 Dec 13 05:14:30.451515 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 05:14:30.451538 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 05:14:30.451560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 05:14:30.451589 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 05:14:30.451612 kernel: audit: type=1403 audit(1734066869.314:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 05:14:30.451694 systemd[1]: Successfully loaded SELinux policy in 56.691ms. Dec 13 05:14:30.451721 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.746ms. Dec 13 05:14:30.451753 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 05:14:30.459675 systemd[1]: Detected virtualization kvm. Dec 13 05:14:30.459713 systemd[1]: Detected architecture x86-64. Dec 13 05:14:30.459734 systemd[1]: Detected first boot. Dec 13 05:14:30.459778 systemd[1]: Hostname set to . Dec 13 05:14:30.459801 systemd[1]: Initializing machine ID from VM UUID. Dec 13 05:14:30.459821 zram_generator::config[1069]: No configuration found. Dec 13 05:14:30.459859 systemd[1]: Populated /etc with preset unit settings. Dec 13 05:14:30.459881 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 05:14:30.459900 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 05:14:30.459921 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 05:14:30.459941 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 05:14:30.459961 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 05:14:30.459982 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 05:14:30.460008 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 05:14:30.460045 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 05:14:30.460066 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 05:14:30.460086 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 05:14:30.460106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 05:14:30.460126 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 05:14:30.460145 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 05:14:30.460165 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 05:14:30.460185 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 05:14:30.460217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 05:14:30.460238 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 05:14:30.460258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 05:14:30.460278 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 05:14:30.460297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 05:14:30.460329 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 05:14:30.460348 systemd[1]: Reached target slices.target - Slice Units. Dec 13 05:14:30.460391 systemd[1]: Reached target swap.target - Swaps. Dec 13 05:14:30.460411 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 05:14:30.460446 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 05:14:30.460464 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 05:14:30.460514 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 05:14:30.460551 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 05:14:30.460589 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 05:14:30.460609 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 05:14:30.460628 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 05:14:30.462543 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 05:14:30.462581 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 05:14:30.462600 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 05:14:30.462619 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 05:14:30.462638 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 05:14:30.462655 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 05:14:30.463178 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 05:14:30.463206 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 05:14:30.463227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:14:30.463247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 05:14:30.463267 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 05:14:30.463286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:14:30.463307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 05:14:30.463327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:14:30.463360 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 05:14:30.463395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 05:14:30.463414 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 05:14:30.463446 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 05:14:30.463464 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 05:14:30.463493 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 05:14:30.463512 kernel: fuse: init (API version 7.39) Dec 13 05:14:30.463530 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 05:14:30.463548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 05:14:30.463578 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 05:14:30.463602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 05:14:30.463621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:14:30.463687 systemd-journald[1173]: Collecting audit messages is disabled. Dec 13 05:14:30.463723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 05:14:30.463753 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 05:14:30.463786 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 05:14:30.463823 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 05:14:30.463848 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 05:14:30.463868 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 05:14:30.463888 systemd-journald[1173]: Journal started Dec 13 05:14:30.463920 systemd-journald[1173]: Runtime Journal (/run/log/journal/5a0c5b843e7f44128427be434daab0ec) is 4.7M, max 38.0M, 33.2M free. Dec 13 05:14:30.470790 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 13 05:14:30.470834 kernel: loop: module loaded Dec 13 05:14:30.476121 kernel: ACPI: bus type drm_connector registered Dec 13 05:14:30.474164 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 05:14:30.481252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 05:14:30.484349 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 05:14:30.484608 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 05:14:30.486192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:14:30.486442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 05:14:30.487632 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 05:14:30.487915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 05:14:30.489271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 05:14:30.489488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 05:14:30.490652 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 05:14:30.490967 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 05:14:30.492152 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 05:14:30.492464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 05:14:30.493629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 05:14:30.495098 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 05:14:30.496521 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 05:14:30.510806 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 05:14:30.519564 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 05:14:30.535810 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 05:14:30.536645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 05:14:30.539578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 05:14:30.549803 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 05:14:30.552797 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 05:14:30.559883 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 05:14:30.561869 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 05:14:30.567572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 05:14:30.580807 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 05:14:30.582393 systemd-journald[1173]: Time spent on flushing to /var/log/journal/5a0c5b843e7f44128427be434daab0ec is 38.618ms for 1124 entries. Dec 13 05:14:30.582393 systemd-journald[1173]: System Journal (/var/log/journal/5a0c5b843e7f44128427be434daab0ec) is 8.0M, max 584.8M, 576.8M free. Dec 13 05:14:30.648127 systemd-journald[1173]: Received client request to flush runtime journal. 
Dec 13 05:14:30.593900 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 05:14:30.595051 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 05:14:30.605201 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 05:14:30.609520 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 05:14:30.651187 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 05:14:30.677423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 05:14:30.690751 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Dec 13 05:14:30.693731 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Dec 13 05:14:30.704928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 05:14:30.712232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 05:14:30.721945 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 05:14:30.725333 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 05:14:30.759041 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 05:14:30.767924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 05:14:30.775971 udevadm[1243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 05:14:30.793669 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Dec 13 05:14:30.794133 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Dec 13 05:14:30.800392 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 05:14:31.331619 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 05:14:31.338876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 05:14:31.381302 systemd-udevd[1253]: Using default interface naming scheme 'v255'. Dec 13 05:14:31.408818 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 05:14:31.418845 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 05:14:31.448861 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 05:14:31.497873 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 05:14:31.542681 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1259) Dec 13 05:14:31.547114 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 05:14:31.567685 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1259) Dec 13 05:14:31.598705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1267) Dec 13 05:14:31.689799 systemd-networkd[1257]: lo: Link UP Dec 13 05:14:31.689813 systemd-networkd[1257]: lo: Gained carrier Dec 13 05:14:31.693411 systemd-networkd[1257]: Enumeration completed Dec 13 05:14:31.694495 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:14:31.694500 systemd-networkd[1257]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 05:14:31.696769 systemd-networkd[1257]: eth0: Link UP Dec 13 05:14:31.696782 systemd-networkd[1257]: eth0: Gained carrier Dec 13 05:14:31.696800 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 05:14:31.701429 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 05:14:31.702442 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 05:14:31.706735 systemd-networkd[1257]: eth0: DHCPv4 address 10.230.15.106/30, gateway 10.230.15.105 acquired from 10.230.15.105 Dec 13 05:14:31.710855 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 05:14:31.734695 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 05:14:31.743714 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 05:14:31.747670 kernel: ACPI: button: Power Button [PWRF] Dec 13 05:14:31.808687 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 05:14:31.814684 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 05:14:31.819089 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 05:14:31.819438 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 05:14:31.860995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 05:14:32.025291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 05:14:32.070493 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 05:14:32.077917 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 05:14:32.105931 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 05:14:32.145621 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 05:14:32.147572 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 05:14:32.155915 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 05:14:32.162185 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 05:14:32.195088 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 05:14:32.196801 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 05:14:32.197764 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 05:14:32.198004 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 05:14:32.198976 systemd[1]: Reached target machines.target - Containers. Dec 13 05:14:32.201345 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 05:14:32.208872 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 05:14:32.211837 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 05:14:32.214866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:14:32.216138 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 13 05:14:32.230951 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 05:14:32.248987 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 05:14:32.253476 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 05:14:32.257008 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 05:14:32.283980 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 05:14:32.297615 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 05:14:32.298641 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 05:14:32.314966 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 05:14:32.335183 kernel: loop1: detected capacity change from 0 to 8 Dec 13 05:14:32.358675 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 05:14:32.392732 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 05:14:32.445735 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 05:14:32.474683 kernel: loop5: detected capacity change from 0 to 8 Dec 13 05:14:32.477764 kernel: loop6: detected capacity change from 0 to 211296 Dec 13 05:14:32.504680 kernel: loop7: detected capacity change from 0 to 142488 Dec 13 05:14:32.522530 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 05:14:32.523374 (sd-merge)[1317]: Merged extensions into '/usr'. Dec 13 05:14:32.530758 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 05:14:32.530799 systemd[1]: Reloading... Dec 13 05:14:32.645547 zram_generator::config[1342]: No configuration found. Dec 13 05:14:32.832811 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 05:14:32.869606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:14:32.955130 systemd[1]: Reloading finished in 423 ms. Dec 13 05:14:32.976515 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 05:14:32.977980 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 05:14:32.987895 systemd[1]: Starting ensure-sysext.service... Dec 13 05:14:32.996910 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 05:14:33.007798 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Dec 13 05:14:33.007829 systemd[1]: Reloading... Dec 13 05:14:33.038495 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 05:14:33.039215 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 05:14:33.041500 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 05:14:33.042303 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Dec 13 05:14:33.042426 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Dec 13 05:14:33.049302 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 05:14:33.049321 systemd-tmpfiles[1409]: Skipping /boot Dec 13 05:14:33.066822 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 05:14:33.066843 systemd-tmpfiles[1409]: Skipping /boot Dec 13 05:14:33.109718 zram_generator::config[1439]: No configuration found. Dec 13 05:14:33.297489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:14:33.382746 systemd[1]: Reloading finished in 374 ms. Dec 13 05:14:33.410375 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 05:14:33.426912 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 05:14:33.431862 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 05:14:33.434864 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 05:14:33.446115 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 05:14:33.452356 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 05:14:33.452519 systemd-networkd[1257]: eth0: Gained IPv6LL Dec 13 05:14:33.470092 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 05:14:33.484530 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:14:33.486044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:14:33.497511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:14:33.510957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:14:33.529949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 05:14:33.532370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:14:33.532577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:14:33.542002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:14:33.542315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 05:14:33.553860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:14:33.554314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:14:33.564038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:14:33.565012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:14:33.565255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:14:33.570352 augenrules[1531]: No rules Dec 13 05:14:33.577052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 05:14:33.577341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 13 05:14:33.583840 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 05:14:33.586497 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 05:14:33.591132 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 05:14:33.595068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 05:14:33.597463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:14:33.597815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 05:14:33.602136 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 05:14:33.611545 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 05:14:33.619451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:14:33.619928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 05:14:33.624932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 05:14:33.629288 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 05:14:33.634851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 05:14:33.637163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 05:14:33.640562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 05:14:33.654834 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 05:14:33.655959 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 05:14:33.656006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 05:14:33.661278 systemd-resolved[1512]: Positive Trust Anchors: Dec 13 05:14:33.661290 systemd-resolved[1512]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 05:14:33.661347 systemd-resolved[1512]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 05:14:33.663789 systemd[1]: Finished ensure-sysext.service. Dec 13 05:14:33.671337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 05:14:33.671607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 05:14:33.673067 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 05:14:33.673329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 05:14:33.673866 systemd-resolved[1512]: Using system hostname 'srv-p0439.gb1.brightbox.com'. Dec 13 05:14:33.680911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
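The positive trust anchor logged by systemd-resolved above is the root zone's DNSSEC DS record. The sketch below only splits it into its standard fields (key tag, algorithm, digest type, digest) so the numbers are readable; mapping algorithm 8 to RSA/SHA-256 and digest type 2 to SHA-256 follows the usual IANA numbering.

```python
# DS record string as logged by systemd-resolved (owner "." and class/type stripped).
ds = "20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

key_tag, algorithm, digest_type, digest = ds.split(maxsplit=3)

algorithms = {"8": "RSA/SHA-256"}   # IANA DNSSEC algorithm numbers (subset)
digest_types = {"2": "SHA-256"}     # IANA DS digest types (subset)

print("key tag:    ", key_tag)
print("algorithm:  ", algorithms.get(algorithm, algorithm))
print("digest type:", digest_types.get(digest_type, digest_type))
print("digest:     ", digest)
```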
Dec 13 05:14:33.681203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 05:14:33.683292 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 05:14:33.684786 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 05:14:33.685182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 05:14:33.690100 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 05:14:33.695042 systemd[1]: Reached target network.target - Network. Dec 13 05:14:33.695749 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 05:14:33.696495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 05:14:33.697367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 05:14:33.697476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 05:14:33.702930 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 05:14:33.785137 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 05:14:33.786265 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 05:14:33.787160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 05:14:33.788040 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 05:14:33.788881 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 05:14:33.789718 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 05:14:33.789765 systemd[1]: Reached target paths.target - Path Units. Dec 13 05:14:33.790410 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 05:14:33.791355 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 05:14:33.792270 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 05:14:33.793085 systemd[1]: Reached target timers.target - Timer Units. Dec 13 05:14:33.795133 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 05:14:33.798003 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 05:14:33.801073 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 05:14:33.803879 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 05:14:33.804689 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 05:14:33.805415 systemd[1]: Reached target basic.target - Basic System. Dec 13 05:14:33.806348 systemd[1]: System is tainted: cgroupsv1 Dec 13 05:14:33.806416 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 05:14:33.806462 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 05:14:33.809117 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 05:14:33.812908 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 05:14:33.818885 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Dec 13 05:14:33.822912 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 05:14:33.830827 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 05:14:33.831941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 05:14:33.839879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:14:33.855706 jq[1578]: false Dec 13 05:14:33.861947 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 05:14:33.874873 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 05:14:33.881366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 05:14:33.887767 extend-filesystems[1579]: Found loop4 Dec 13 05:14:33.887767 extend-filesystems[1579]: Found loop5 Dec 13 05:14:33.887767 extend-filesystems[1579]: Found loop6 Dec 13 05:14:33.896769 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 05:14:33.907956 extend-filesystems[1579]: Found loop7 Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda1 Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda2 Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda3 Dec 13 05:14:33.907956 extend-filesystems[1579]: Found usr Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda4 Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda6 Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda7 Dec 13 05:14:33.907956 extend-filesystems[1579]: Found vda9 Dec 13 05:14:33.907956 extend-filesystems[1579]: Checking size of /dev/vda9 Dec 13 05:14:33.956952 extend-filesystems[1579]: Resized partition /dev/vda9 Dec 13 05:14:33.912642 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 05:14:33.927513 dbus-daemon[1577]: [system] SELinux support is enabled Dec 13 05:14:33.960371 extend-filesystems[1606]: resize2fs 1.47.1 (20-May-2024) Dec 13 05:14:33.930494 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 05:14:33.939794 dbus-daemon[1577]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1257 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 05:14:33.932200 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 05:14:33.937534 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 05:14:33.948792 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 05:14:33.955874 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 05:14:33.991770 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 05:14:33.974943 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 05:14:33.980823 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 05:14:34.007517 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 13 05:14:34.018555 jq[1605]: true Dec 13 05:14:34.040917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1263) Dec 13 05:14:34.040960 update_engine[1603]: I20241213 05:14:34.040241 1603 main.cc:92] Flatcar Update Engine starting Dec 13 05:14:34.022035 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 05:14:34.027623 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 05:14:34.032118 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 05:14:34.059127 update_engine[1603]: I20241213 05:14:34.058905 1603 update_check_scheduler.cc:74] Next update check in 2m5s Dec 13 05:14:34.081763 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 05:14:34.082987 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 05:14:35.030669 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 05:14:35.021810 systemd-resolved[1512]: Clock change detected. Flushing caches. Dec 13 05:14:35.029363 systemd-timesyncd[1570]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org). Dec 13 05:14:35.029441 systemd-timesyncd[1570]: Initial clock synchronization to Fri 2024-12-13 05:14:35.021750 UTC. Dec 13 05:14:35.029804 systemd[1]: Started update-engine.service - Update Engine. Dec 13 05:14:35.035332 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 05:14:35.035377 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 05:14:35.045337 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 05:14:35.047862 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 05:14:35.047900 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 05:14:35.050074 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 05:14:35.057718 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 05:14:35.065199 jq[1620]: true Dec 13 05:14:35.065509 tar[1612]: linux-amd64/helm Dec 13 05:14:35.201222 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 05:14:35.235640 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 05:14:35.235640 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 05:14:35.235640 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 05:14:35.242929 extend-filesystems[1579]: Resized filesystem in /dev/vda9 Dec 13 05:14:35.239653 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 05:14:35.240073 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 05:14:35.288730 bash[1656]: Updated "/home/core/.ssh/authorized_keys" Dec 13 05:14:35.292031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 05:14:35.311839 systemd[1]: Starting sshkeys.service... 
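The resize2fs/EXT4 messages above grow /dev/vda9 online from 1,617,920 to 15,121,403 blocks; since the journal reports 4k blocks, that is roughly a 6.2 GiB filesystem being extended to about 57.7 GiB. A quick arithmetic sketch:

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the EXT4 resize messages above

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {blocks_to_gib(1_617_920):.1f} GiB")   # ~6.2 GiB
print(f"after:  {blocks_to_gib(15_121_403):.1f} GiB")  # ~57.7 GiB
```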
Dec 13 05:14:35.330938 systemd-logind[1602]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 05:14:35.330991 systemd-logind[1602]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 05:14:35.335670 systemd-logind[1602]: New seat seat0. Dec 13 05:14:35.354771 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 05:14:35.394357 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 05:14:35.403493 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 05:14:35.605836 containerd[1621]: time="2024-12-13T05:14:35.605655308Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 05:14:35.609836 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 05:14:35.610010 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 05:14:35.615739 dbus-daemon[1577]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1635 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 05:14:35.624492 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 05:14:35.625866 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 05:14:35.667079 polkitd[1681]: Started polkitd version 121 Dec 13 05:14:35.679475 polkitd[1681]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 05:14:35.679568 polkitd[1681]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 05:14:35.684407 polkitd[1681]: Finished loading, compiling and executing 2 rules Dec 13 05:14:35.686049 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 05:14:35.685801 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 05:14:35.689154 polkitd[1681]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 05:14:35.706211 containerd[1621]: time="2024-12-13T05:14:35.705710212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:14:35.714804 systemd-hostnamed[1635]: Hostname set to (static) Dec 13 05:14:35.718838 containerd[1621]: time="2024-12-13T05:14:35.718793039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:14:35.718977 containerd[1621]: time="2024-12-13T05:14:35.718950429Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 05:14:35.719153 containerd[1621]: time="2024-12-13T05:14:35.719096812Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 05:14:35.720187 containerd[1621]: time="2024-12-13T05:14:35.720158591Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 05:14:35.720641 containerd[1621]: time="2024-12-13T05:14:35.720614265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 05:14:35.721251 containerd[1621]: time="2024-12-13T05:14:35.721213340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:14:35.722191 containerd[1621]: time="2024-12-13T05:14:35.722160825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:14:35.723318 containerd[1621]: time="2024-12-13T05:14:35.722780559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:14:35.723318 containerd[1621]: time="2024-12-13T05:14:35.722814839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 05:14:35.723318 containerd[1621]: time="2024-12-13T05:14:35.723169313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:14:35.723318 containerd[1621]: time="2024-12-13T05:14:35.723193153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 05:14:35.723599 containerd[1621]: time="2024-12-13T05:14:35.723563251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:14:35.725014 containerd[1621]: time="2024-12-13T05:14:35.724888259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 05:14:35.725841 containerd[1621]: time="2024-12-13T05:14:35.725718850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 05:14:35.725841 containerd[1621]: time="2024-12-13T05:14:35.725768032Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 05:14:35.728440 containerd[1621]: time="2024-12-13T05:14:35.726275963Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 05:14:35.726369 systemd-networkd[1257]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83da:24:19ff:fee6:f6a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83da:24:19ff:fee6:f6a/64 assigned by NDisc. Dec 13 05:14:35.726375 systemd-networkd[1257]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 05:14:35.730246 containerd[1621]: time="2024-12-13T05:14:35.729238242Z" level=info msg="metadata content store policy set" policy=shared Dec 13 05:14:35.737323 containerd[1621]: time="2024-12-13T05:14:35.737243546Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.737445415Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.737503642Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.737531427Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.737553836Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.737768877Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738251065Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738455646Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738492840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738512009Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738533285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738558263Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738595735Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738618217Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.739771 containerd[1621]: time="2024-12-13T05:14:35.738641005Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738660915Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738710648Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738734664Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738774356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738798159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738829383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738852471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738871604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738918886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738939751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738969915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.738989969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.739011836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740349 containerd[1621]: time="2024-12-13T05:14:35.739042326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740815 containerd[1621]: time="2024-12-13T05:14:35.739061429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.740815 containerd[1621]: time="2024-12-13T05:14:35.739081735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.742443 containerd[1621]: time="2024-12-13T05:14:35.742415363Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742581316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742614228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742636074Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742719085Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742763211Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742782186Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742813331Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742830989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742854915Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742890288Z" level=info msg="NRI interface is disabled by configuration." Dec 13 05:14:35.744967 containerd[1621]: time="2024-12-13T05:14:35.742919036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 05:14:35.745433 containerd[1621]: time="2024-12-13T05:14:35.743312944Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 05:14:35.745433 containerd[1621]: time="2024-12-13T05:14:35.743394276Z" level=info msg="Connect containerd service" Dec 13 05:14:35.745433 containerd[1621]: time="2024-12-13T05:14:35.743459791Z" level=info msg="using legacy CRI server" Dec 13 05:14:35.745433 containerd[1621]: time="2024-12-13T05:14:35.743479860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 05:14:35.745433 containerd[1621]: time="2024-12-13T05:14:35.743714498Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 05:14:35.749804 
containerd[1621]: time="2024-12-13T05:14:35.749771149Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 05:14:35.750977 containerd[1621]: time="2024-12-13T05:14:35.750567928Z" level=info msg="Start subscribing containerd event" Dec 13 05:14:35.751168 containerd[1621]: time="2024-12-13T05:14:35.751104808Z" level=info msg="Start recovering state" Dec 13 05:14:35.751415 containerd[1621]: time="2024-12-13T05:14:35.751391011Z" level=info msg="Start event monitor" Dec 13 05:14:35.752191 containerd[1621]: time="2024-12-13T05:14:35.752164525Z" level=info msg="Start snapshots syncer" Dec 13 05:14:35.752340 containerd[1621]: time="2024-12-13T05:14:35.752314167Z" level=info msg="Start cni network conf syncer for default" Dec 13 05:14:35.752890 containerd[1621]: time="2024-12-13T05:14:35.752446250Z" level=info msg="Start streaming server" Dec 13 05:14:35.755174 containerd[1621]: time="2024-12-13T05:14:35.754897656Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 05:14:35.755483 containerd[1621]: time="2024-12-13T05:14:35.755458562Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 05:14:35.757885 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 05:14:35.762223 containerd[1621]: time="2024-12-13T05:14:35.760249126Z" level=info msg="containerd successfully booted in 0.157229s" Dec 13 05:14:36.093989 sshd_keygen[1622]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 05:14:36.121156 tar[1612]: linux-amd64/LICENSE Dec 13 05:14:36.121156 tar[1612]: linux-amd64/README.md Dec 13 05:14:36.136641 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 05:14:36.152216 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 05:14:36.168464 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 05:14:36.174158 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 05:14:36.174486 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 05:14:36.185752 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 05:14:36.206367 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 05:14:36.217601 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 05:14:36.221567 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 05:14:36.224922 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 05:14:36.580316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:14:36.586111 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:14:37.371314 kubelet[1728]: E1213 05:14:37.370994 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:14:37.373926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:14:37.374290 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
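The CRI plugin's "no network config found in /etc/cni/net.d" error above is the expected state on a node where no CNI add-on has been installed yet; containerd keeps watching that directory (NetworkPluginConfDir in the config dump) and picks a configuration up once one appears. As a purely illustrative sketch, and not something this boot actually does, the snippet below writes a minimal bridge-type CNI conflist of the general shape such add-ons drop into that directory; the network name, subnet, and plugin choice here are assumptions.

```python
import json

# Hypothetical example only: a real cluster's CNI add-on (flannel, calico, ...)
# installs its own config; names and addresses below are made up.
conflist = {
    "cniVersion": "0.3.1",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

with open("/tmp/10-example.conflist", "w") as f:  # /etc/cni/net.d/ on a real node
    json.dump(conflist, f, indent=2)
```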
Dec 13 05:14:41.296794 login[1719]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 13 05:14:41.297502 login[1718]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 05:14:41.317492 systemd-logind[1602]: New session 2 of user core. Dec 13 05:14:41.318849 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 05:14:41.330523 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 05:14:41.355322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 05:14:41.365781 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 05:14:41.371986 (systemd)[1748]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 05:14:41.510540 systemd[1748]: Queued start job for default target default.target. Dec 13 05:14:41.512304 systemd[1748]: Created slice app.slice - User Application Slice. Dec 13 05:14:41.512342 systemd[1748]: Reached target paths.target - Paths. Dec 13 05:14:41.512365 systemd[1748]: Reached target timers.target - Timers. Dec 13 05:14:41.519336 systemd[1748]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 05:14:41.540116 systemd[1748]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 05:14:41.540227 systemd[1748]: Reached target sockets.target - Sockets. Dec 13 05:14:41.540252 systemd[1748]: Reached target basic.target - Basic System. Dec 13 05:14:41.540335 systemd[1748]: Reached target default.target - Main User Target. Dec 13 05:14:41.540404 systemd[1748]: Startup finished in 158ms. Dec 13 05:14:41.540556 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 05:14:41.551616 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 05:14:41.891592 coreos-metadata[1575]: Dec 13 05:14:41.891 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:14:41.920184 coreos-metadata[1575]: Dec 13 05:14:41.919 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 05:14:41.926068 coreos-metadata[1575]: Dec 13 05:14:41.926 INFO Fetch failed with 404: resource not found Dec 13 05:14:41.926068 coreos-metadata[1575]: Dec 13 05:14:41.926 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 05:14:41.927012 coreos-metadata[1575]: Dec 13 05:14:41.926 INFO Fetch successful Dec 13 05:14:41.927190 coreos-metadata[1575]: Dec 13 05:14:41.927 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 05:14:41.938293 coreos-metadata[1575]: Dec 13 05:14:41.938 INFO Fetch successful Dec 13 05:14:41.938498 coreos-metadata[1575]: Dec 13 05:14:41.938 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 05:14:41.954864 coreos-metadata[1575]: Dec 13 05:14:41.954 INFO Fetch successful Dec 13 05:14:41.955067 coreos-metadata[1575]: Dec 13 05:14:41.955 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 05:14:41.971182 coreos-metadata[1575]: Dec 13 05:14:41.971 INFO Fetch successful Dec 13 05:14:41.971401 coreos-metadata[1575]: Dec 13 05:14:41.971 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 05:14:41.989178 coreos-metadata[1575]: Dec 13 05:14:41.989 INFO Fetch successful Dec 13 05:14:42.026103 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
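The coreos-metadata entries above fall back from a config-drive to the link-local metadata service and fetch hostname, instance-id, instance-type and addresses over plain HTTP. A minimal sketch of the same kind of lookup, using paths taken from the log; it is only meaningful from inside such an instance, and simply reports the endpoints as unavailable anywhere else:

```python
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest/meta-data"

# Paths taken from the coreos-metadata log lines above.
for path in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    try:
        with urlopen(f"{BASE}/{path}", timeout=2) as resp:
            print(path, "=", resp.read().decode().strip())
    except OSError as exc:
        print(path, "unavailable:", exc)
```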
Dec 13 05:14:42.028432 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 05:14:42.299303 login[1719]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 05:14:42.306665 systemd-logind[1602]: New session 1 of user core. Dec 13 05:14:42.326571 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 05:14:42.659796 coreos-metadata[1667]: Dec 13 05:14:42.659 WARN failed to locate config-drive, using the metadata service API instead Dec 13 05:14:42.683547 coreos-metadata[1667]: Dec 13 05:14:42.683 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 05:14:42.710241 coreos-metadata[1667]: Dec 13 05:14:42.710 INFO Fetch successful Dec 13 05:14:42.710241 coreos-metadata[1667]: Dec 13 05:14:42.710 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 05:14:42.742236 coreos-metadata[1667]: Dec 13 05:14:42.742 INFO Fetch successful Dec 13 05:14:42.745179 unknown[1667]: wrote ssh authorized keys file for user: core Dec 13 05:14:42.768208 update-ssh-keys[1793]: Updated "/home/core/.ssh/authorized_keys" Dec 13 05:14:42.769783 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 05:14:42.777100 systemd[1]: Finished sshkeys.service. Dec 13 05:14:42.783632 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 05:14:42.784480 systemd[1]: Startup finished in 15.936s (kernel) + 12.591s (userspace) = 28.527s. Dec 13 05:14:45.513682 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 05:14:45.520546 systemd[1]: Started sshd@0-10.230.15.106:22-147.75.109.163:42528.service - OpenSSH per-connection server daemon (147.75.109.163:42528). Dec 13 05:14:46.433398 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 42528 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:14:46.435715 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:14:46.442773 systemd-logind[1602]: New session 3 of user core. Dec 13 05:14:46.450671 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 05:14:47.201469 systemd[1]: Started sshd@1-10.230.15.106:22-147.75.109.163:39972.service - OpenSSH per-connection server daemon (147.75.109.163:39972). Dec 13 05:14:47.563139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 05:14:47.569347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:14:47.767368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:14:47.782735 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:14:47.852008 kubelet[1818]: E1213 05:14:47.851815 1818 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:14:47.856445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:14:47.856795 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 05:14:48.089269 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 39972 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:14:48.091268 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:14:48.099990 systemd-logind[1602]: New session 4 of user core. Dec 13 05:14:48.106163 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 05:14:48.715002 sshd[1804]: pam_unix(sshd:session): session closed for user core Dec 13 05:14:48.721564 systemd[1]: sshd@1-10.230.15.106:22-147.75.109.163:39972.service: Deactivated successfully. Dec 13 05:14:48.727526 systemd-logind[1602]: Session 4 logged out. Waiting for processes to exit. Dec 13 05:14:48.728526 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 05:14:48.730747 systemd-logind[1602]: Removed session 4. Dec 13 05:14:48.864498 systemd[1]: Started sshd@2-10.230.15.106:22-147.75.109.163:39982.service - OpenSSH per-connection server daemon (147.75.109.163:39982). Dec 13 05:14:49.750527 sshd[1833]: Accepted publickey for core from 147.75.109.163 port 39982 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:14:49.752923 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:14:49.760571 systemd-logind[1602]: New session 5 of user core. Dec 13 05:14:49.766553 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 05:14:50.363488 sshd[1833]: pam_unix(sshd:session): session closed for user core Dec 13 05:14:50.368289 systemd[1]: sshd@2-10.230.15.106:22-147.75.109.163:39982.service: Deactivated successfully. Dec 13 05:14:50.368401 systemd-logind[1602]: Session 5 logged out. Waiting for processes to exit. Dec 13 05:14:50.372993 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 05:14:50.374419 systemd-logind[1602]: Removed session 5. Dec 13 05:14:50.513449 systemd[1]: Started sshd@3-10.230.15.106:22-147.75.109.163:39998.service - OpenSSH per-connection server daemon (147.75.109.163:39998). Dec 13 05:14:51.401662 sshd[1841]: Accepted publickey for core from 147.75.109.163 port 39998 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:14:51.403709 sshd[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:14:51.410062 systemd-logind[1602]: New session 6 of user core. Dec 13 05:14:51.421851 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 05:14:52.053451 sshd[1841]: pam_unix(sshd:session): session closed for user core Dec 13 05:14:52.057795 systemd[1]: sshd@3-10.230.15.106:22-147.75.109.163:39998.service: Deactivated successfully. Dec 13 05:14:52.061778 systemd-logind[1602]: Session 6 logged out. Waiting for processes to exit. Dec 13 05:14:52.063192 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 05:14:52.064492 systemd-logind[1602]: Removed session 6. Dec 13 05:14:52.203491 systemd[1]: Started sshd@4-10.230.15.106:22-147.75.109.163:40002.service - OpenSSH per-connection server daemon (147.75.109.163:40002). Dec 13 05:14:53.085924 sshd[1849]: Accepted publickey for core from 147.75.109.163 port 40002 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:14:53.088002 sshd[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:14:53.095743 systemd-logind[1602]: New session 7 of user core. Dec 13 05:14:53.102590 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 13 05:14:53.574042 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 05:14:53.575073 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:14:53.595913 sudo[1853]: pam_unix(sudo:session): session closed for user root Dec 13 05:14:53.740732 sshd[1849]: pam_unix(sshd:session): session closed for user core Dec 13 05:14:53.745914 systemd[1]: sshd@4-10.230.15.106:22-147.75.109.163:40002.service: Deactivated successfully. Dec 13 05:14:53.750682 systemd-logind[1602]: Session 7 logged out. Waiting for processes to exit. Dec 13 05:14:53.752530 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 05:14:53.753940 systemd-logind[1602]: Removed session 7. Dec 13 05:14:53.889543 systemd[1]: Started sshd@5-10.230.15.106:22-147.75.109.163:40018.service - OpenSSH per-connection server daemon (147.75.109.163:40018). Dec 13 05:14:54.777391 sshd[1858]: Accepted publickey for core from 147.75.109.163 port 40018 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:14:54.779735 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:14:54.788211 systemd-logind[1602]: New session 8 of user core. Dec 13 05:14:54.794622 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 05:14:55.253570 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 05:14:55.254121 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:14:55.259724 sudo[1863]: pam_unix(sudo:session): session closed for user root Dec 13 05:14:55.267462 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 05:14:55.267937 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:14:55.284503 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 05:14:55.298799 auditctl[1866]: No rules Dec 13 05:14:55.299394 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 05:14:55.299758 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 05:14:55.309596 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 05:14:55.349784 augenrules[1885]: No rules Dec 13 05:14:55.351474 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 05:14:55.352908 sudo[1862]: pam_unix(sudo:session): session closed for user root Dec 13 05:14:55.497484 sshd[1858]: pam_unix(sshd:session): session closed for user core Dec 13 05:14:55.502649 systemd[1]: sshd@5-10.230.15.106:22-147.75.109.163:40018.service: Deactivated successfully. Dec 13 05:14:55.506210 systemd-logind[1602]: Session 8 logged out. Waiting for processes to exit. Dec 13 05:14:55.506929 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 05:14:55.509084 systemd-logind[1602]: Removed session 8. Dec 13 05:14:55.659537 systemd[1]: Started sshd@6-10.230.15.106:22-147.75.109.163:40020.service - OpenSSH per-connection server daemon (147.75.109.163:40020). Dec 13 05:14:56.539098 sshd[1894]: Accepted publickey for core from 147.75.109.163 port 40020 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:14:56.541059 sshd[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:14:56.547356 systemd-logind[1602]: New session 9 of user core. 
Dec 13 05:14:56.554581 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 05:14:57.015324 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 05:14:57.015825 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 05:14:57.477503 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 05:14:57.477951 (dockerd)[1915]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 05:14:57.921800 dockerd[1915]: time="2024-12-13T05:14:57.920706029Z" level=info msg="Starting up" Dec 13 05:14:57.925316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 05:14:57.934342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:14:58.191576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:14:58.210718 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:14:58.301384 kubelet[1947]: E1213 05:14:58.301307 1947 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:14:58.303754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:14:58.304073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:14:58.346452 dockerd[1915]: time="2024-12-13T05:14:58.345935519Z" level=info msg="Loading containers: start." Dec 13 05:14:58.492237 kernel: Initializing XFRM netlink socket Dec 13 05:14:58.608216 systemd-networkd[1257]: docker0: Link UP Dec 13 05:14:58.621899 dockerd[1915]: time="2024-12-13T05:14:58.621845320Z" level=info msg="Loading containers: done." Dec 13 05:14:58.646844 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3215073014-merged.mount: Deactivated successfully. Dec 13 05:14:58.648867 dockerd[1915]: time="2024-12-13T05:14:58.648824680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 05:14:58.649297 dockerd[1915]: time="2024-12-13T05:14:58.649266066Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 05:14:58.649951 dockerd[1915]: time="2024-12-13T05:14:58.649566642Z" level=info msg="Daemon has completed initialization" Dec 13 05:14:58.687546 dockerd[1915]: time="2024-12-13T05:14:58.686832647Z" level=info msg="API listen on /run/docker.sock" Dec 13 05:14:58.687234 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 05:15:00.087473 containerd[1621]: time="2024-12-13T05:15:00.087330541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 05:15:00.873968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12969643.mount: Deactivated successfully. 
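dockerd's warning above notes that this kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, so it avoids the native overlay2 diff path when building images. Where the running kernel exposes its build configuration (via /proc/config.gz, or a /boot/config-* file on many distributions; neither is guaranteed to exist, so treat this as an illustrative check rather than part of the boot flow), the option can be confirmed like this:

```python
import gzip
import os

OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

def kernel_config_lines():
    # /proc/config.gz exists only when the kernel was built with CONFIG_IKCONFIG_PROC.
    if os.path.exists("/proc/config.gz"):
        with gzip.open("/proc/config.gz", "rt") as f:
            return f.readlines()
    path = f"/boot/config-{os.uname().release}"
    if os.path.exists(path):
        with open(path) as f:
            return f.readlines()
    return []

matches = [line.strip() for line in kernel_config_lines() if line.startswith(OPTION + "=")]
print(matches or [f"{OPTION}: not found (no readable kernel config)"])
```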
Dec 13 05:15:03.188658 containerd[1621]: time="2024-12-13T05:15:03.186812863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:03.193643 containerd[1621]: time="2024-12-13T05:15:03.193574641Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Dec 13 05:15:03.194522 containerd[1621]: time="2024-12-13T05:15:03.194438922Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:03.198195 containerd[1621]: time="2024-12-13T05:15:03.198089875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:03.201460 containerd[1621]: time="2024-12-13T05:15:03.199858110Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.112352794s" Dec 13 05:15:03.201460 containerd[1621]: time="2024-12-13T05:15:03.199929893Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 05:15:03.231817 containerd[1621]: time="2024-12-13T05:15:03.231756337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 05:15:05.755744 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
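The kube-apiserver pull above reports roughly 35.1 MB in about 3.11 s. The log does not report throughput directly, and the duration containerd logs covers the whole pull including unpacking, so this is only a rough back-of-the-envelope figure, but it works out to around 11 MB/s:

```python
# Numbers taken from the containerd "Pulled image" message above.
size_bytes = 35_136_054    # reported image size
duration_s = 3.112352794   # reported pull duration

print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")     # ~11.3 MB/s
print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")  # ~10.8 MiB/s
```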
Dec 13 05:15:05.767661 containerd[1621]: time="2024-12-13T05:15:05.765039358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:05.768752 containerd[1621]: time="2024-12-13T05:15:05.768690782Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Dec 13 05:15:05.771393 containerd[1621]: time="2024-12-13T05:15:05.771351791Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:05.777028 containerd[1621]: time="2024-12-13T05:15:05.776982326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:05.778764 containerd[1621]: time="2024-12-13T05:15:05.778713054Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.546891733s" Dec 13 05:15:05.778852 containerd[1621]: time="2024-12-13T05:15:05.778788740Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 05:15:05.817679 containerd[1621]: time="2024-12-13T05:15:05.817624038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 05:15:07.493161 containerd[1621]: time="2024-12-13T05:15:07.492984865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:07.494698 containerd[1621]: time="2024-12-13T05:15:07.494629259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Dec 13 05:15:07.495832 containerd[1621]: time="2024-12-13T05:15:07.495771210Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:07.499563 containerd[1621]: time="2024-12-13T05:15:07.499503871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:07.501096 containerd[1621]: time="2024-12-13T05:15:07.501053491Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.683374477s" Dec 13 05:15:07.501202 containerd[1621]: time="2024-12-13T05:15:07.501100595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 05:15:07.532288 
containerd[1621]: time="2024-12-13T05:15:07.532114119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 05:15:08.314093 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 05:15:08.326456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:15:08.538791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:08.548733 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:15:08.641877 kubelet[2174]: E1213 05:15:08.640923 2174 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:15:08.644384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:15:08.644943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:15:09.555502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098158606.mount: Deactivated successfully. Dec 13 05:15:10.138464 containerd[1621]: time="2024-12-13T05:15:10.138301829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:10.139888 containerd[1621]: time="2024-12-13T05:15:10.139594477Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 05:15:10.140807 containerd[1621]: time="2024-12-13T05:15:10.140753869Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:10.146072 containerd[1621]: time="2024-12-13T05:15:10.145998721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:10.147065 containerd[1621]: time="2024-12-13T05:15:10.147017212Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.614826011s" Dec 13 05:15:10.147164 containerd[1621]: time="2024-12-13T05:15:10.147094489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 05:15:10.179577 containerd[1621]: time="2024-12-13T05:15:10.179237823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 05:15:10.861195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217245816.mount: Deactivated successfully. 
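The kube-proxy image pulled above is normally run as a DaemonSet whose behaviour kubeadm drives through a KubeProxyConfiguration object; none of that configuration is visible in this log, so the sketch below only shows the shape of such an object, and the mode and every other value are assumptions.

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: iptables        # assumed; the proxy mode actually in use is not recorded in this log
    clusterCIDR: ""       # left empty on purpose; the pod CIDR is not shown in this log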
Dec 13 05:15:12.017761 containerd[1621]: time="2024-12-13T05:15:12.017381996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:12.019629 containerd[1621]: time="2024-12-13T05:15:12.019322934Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 05:15:12.020545 containerd[1621]: time="2024-12-13T05:15:12.020490667Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:12.027462 containerd[1621]: time="2024-12-13T05:15:12.024744857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:12.027853 containerd[1621]: time="2024-12-13T05:15:12.027818038Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.848523304s" Dec 13 05:15:12.028011 containerd[1621]: time="2024-12-13T05:15:12.027981496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 05:15:12.056972 containerd[1621]: time="2024-12-13T05:15:12.056919589Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 05:15:12.658743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236826633.mount: Deactivated successfully. 
Dec 13 05:15:12.666548 containerd[1621]: time="2024-12-13T05:15:12.665822318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:12.667293 containerd[1621]: time="2024-12-13T05:15:12.667225037Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Dec 13 05:15:12.668416 containerd[1621]: time="2024-12-13T05:15:12.668356530Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:12.671367 containerd[1621]: time="2024-12-13T05:15:12.671296733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:12.672846 containerd[1621]: time="2024-12-13T05:15:12.672607867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 615.389691ms" Dec 13 05:15:12.672846 containerd[1621]: time="2024-12-13T05:15:12.672656557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 05:15:12.704644 containerd[1621]: time="2024-12-13T05:15:12.704585966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 05:15:13.352843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699996452.mount: Deactivated successfully. Dec 13 05:15:16.120193 containerd[1621]: time="2024-12-13T05:15:16.120057662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:16.122352 containerd[1621]: time="2024-12-13T05:15:16.122276369Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Dec 13 05:15:16.123166 containerd[1621]: time="2024-12-13T05:15:16.122683181Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:16.127158 containerd[1621]: time="2024-12-13T05:15:16.127074346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:16.129100 containerd[1621]: time="2024-12-13T05:15:16.128848744Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.424208191s" Dec 13 05:15:16.129100 containerd[1621]: time="2024-12-13T05:15:16.128907091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 05:15:18.813754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
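The images fetched above (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.29.12, coredns v1.11.1, pause 3.9 and etcd 3.5.10-0, all from registry.k8s.io) match the set kubeadm preloads for a v1.29.12 control plane. Assuming kubeadm is indeed driving this bootstrap, the ClusterConfiguration selecting that version would look roughly like the sketch below; neither the file nor its contents appear in the log itself.

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.29.12       # matches the kube-* image tags pulled above
    imageRepository: registry.k8s.io  # matches the registry used by every pull in this log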
Dec 13 05:15:18.827350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:15:19.304653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:19.319690 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 05:15:19.406297 kubelet[2368]: E1213 05:15:19.406221 2368 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 05:15:19.409357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 05:15:19.409710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 05:15:20.193490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:20.213501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:15:20.249565 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-9.scope)... Dec 13 05:15:20.249829 systemd[1]: Reloading... Dec 13 05:15:20.275365 update_engine[1603]: I20241213 05:15:20.273013 1603 update_attempter.cc:509] Updating boot flags... Dec 13 05:15:20.408193 zram_generator::config[2422]: No configuration found. Dec 13 05:15:20.472319 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2448) Dec 13 05:15:20.636412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:15:20.736461 systemd[1]: Reloading finished in 485 ms. Dec 13 05:15:20.823410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:20.838921 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:15:20.848844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:15:20.863479 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 05:15:20.864014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:20.871630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:15:21.042351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:21.051780 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:15:21.136142 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:15:21.136142 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 05:15:21.136142 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
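The deprecation warnings from kubelet[2520] above say that --container-runtime-endpoint and --volume-plugin-dir should be moved into the file named by --config, while --pod-infra-container-image has no config-file equivalent and will simply be removed. A hedged sketch of the corresponding KubeletConfiguration fields follows: the volume plugin directory is the one this kubelet reports recreating further down, and the runtime endpoint is assumed (the host clearly runs containerd, but the exact socket path passed on the command line is not shown in the log).

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock     # assumed value
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/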
Dec 13 05:15:21.136787 kubelet[2520]: I1213 05:15:21.136234 2520 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 05:15:21.635625 kubelet[2520]: I1213 05:15:21.635553 2520 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 05:15:21.635625 kubelet[2520]: I1213 05:15:21.635607 2520 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 05:15:21.635943 kubelet[2520]: I1213 05:15:21.635910 2520 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 05:15:21.667395 kubelet[2520]: E1213 05:15:21.667320 2520 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.15.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.667646 kubelet[2520]: I1213 05:15:21.667618 2520 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:15:21.692039 kubelet[2520]: I1213 05:15:21.691979 2520 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 05:15:21.694020 kubelet[2520]: I1213 05:15:21.693948 2520 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 05:15:21.695222 kubelet[2520]: I1213 05:15:21.695146 2520 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 05:15:21.695788 kubelet[2520]: I1213 05:15:21.695752 2520 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 05:15:21.695788 kubelet[2520]: I1213 05:15:21.695788 2520 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 05:15:21.696010 kubelet[2520]: I1213 05:15:21.695980 2520 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:15:21.696208 kubelet[2520]: I1213 05:15:21.696187 2520 kubelet.go:396] "Attempting to sync node with API server" Dec 13 05:15:21.696285 
kubelet[2520]: I1213 05:15:21.696223 2520 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:15:21.697288 kubelet[2520]: I1213 05:15:21.696952 2520 kubelet.go:312] "Adding apiserver pod source" Dec 13 05:15:21.697288 kubelet[2520]: I1213 05:15:21.696992 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:15:21.698932 kubelet[2520]: W1213 05:15:21.698634 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.15.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p0439.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.698932 kubelet[2520]: E1213 05:15:21.698747 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.15.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p0439.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.698932 kubelet[2520]: W1213 05:15:21.698855 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.15.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.698932 kubelet[2520]: E1213 05:15:21.698906 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.15.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.700042 kubelet[2520]: I1213 05:15:21.699566 2520 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:15:21.704272 kubelet[2520]: I1213 05:15:21.704236 2520 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:15:21.704359 kubelet[2520]: W1213 05:15:21.704340 2520 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
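kubelet[2520] registers /etc/kubernetes/manifests as its static pod path above, and the RunPodSandbox and VerifyControllerAttachedVolume messages below show three static pods (kube-apiserver, kube-controller-manager and kube-scheduler for srv-p0439.gb1.brightbox.com) with hostPath volumes named ca-certs, k8s-certs, usr-share-ca-certificates, kubeconfig and flexvolume-dir. The manifests themselves are not reproduced in the log, so the skeleton below is only a hedged illustration of their general shape: the volume names and image tag come from the log, while the mount and host paths are assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver        # the kubelet appends the node name, e.g. kube-apiserver-srv-p0439.gb1.brightbox.com
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.29.12   # the image pulled earlier in this log
        volumeMounts:
        - name: k8s-certs         # volume name as reported by reconciler_common below
          mountPath: /etc/kubernetes/pki                 # assumed mount path
          readOnly: true
        - name: ca-certs
          mountPath: /etc/ssl/certs                      # assumed mount path
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki                      # assumed host path
          type: DirectoryOrCreate
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs                           # assumed host path
          type: DirectoryOrCreate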
Dec 13 05:15:21.706232 kubelet[2520]: I1213 05:15:21.705713 2520 server.go:1256] "Started kubelet" Dec 13 05:15:21.707743 kubelet[2520]: I1213 05:15:21.707672 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 05:15:21.722310 kubelet[2520]: I1213 05:15:21.722283 2520 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:15:21.723991 kubelet[2520]: I1213 05:15:21.723544 2520 server.go:461] "Adding debug handlers to kubelet server" Dec 13 05:15:21.725178 kubelet[2520]: I1213 05:15:21.725151 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 05:15:21.725760 kubelet[2520]: I1213 05:15:21.725481 2520 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 05:15:21.729585 kubelet[2520]: I1213 05:15:21.728394 2520 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 05:15:21.729585 kubelet[2520]: E1213 05:15:21.729114 2520 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.106:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.106:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-p0439.gb1.brightbox.com.1810a4ad850564b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-p0439.gb1.brightbox.com,UID:srv-p0439.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-p0439.gb1.brightbox.com,},FirstTimestamp:2024-12-13 05:15:21.705661618 +0000 UTC m=+0.648701794,LastTimestamp:2024-12-13 05:15:21.705661618 +0000 UTC m=+0.648701794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-p0439.gb1.brightbox.com,}" Dec 13 05:15:21.730159 kubelet[2520]: I1213 05:15:21.730106 2520 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 05:15:21.730266 kubelet[2520]: I1213 05:15:21.730246 2520 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 05:15:21.730539 kubelet[2520]: E1213 05:15:21.730369 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0439.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.106:6443: connect: connection refused" interval="200ms" Dec 13 05:15:21.730813 kubelet[2520]: W1213 05:15:21.730758 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.15.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.730813 kubelet[2520]: E1213 05:15:21.730812 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.15.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.733911 kubelet[2520]: I1213 05:15:21.733884 2520 factory.go:221] Registration of the systemd container factory successfully Dec 13 05:15:21.734014 kubelet[2520]: I1213 05:15:21.733990 2520 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Dec 13 05:15:21.737547 kubelet[2520]: E1213 05:15:21.737521 2520 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 05:15:21.737853 kubelet[2520]: I1213 05:15:21.737832 2520 factory.go:221] Registration of the containerd container factory successfully Dec 13 05:15:21.765999 kubelet[2520]: I1213 05:15:21.765081 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 05:15:21.770850 kubelet[2520]: I1213 05:15:21.770825 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 05:15:21.770944 kubelet[2520]: I1213 05:15:21.770881 2520 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:15:21.770944 kubelet[2520]: I1213 05:15:21.770917 2520 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 05:15:21.771070 kubelet[2520]: E1213 05:15:21.771024 2520 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 05:15:21.779013 kubelet[2520]: W1213 05:15:21.778979 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.15.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.779109 kubelet[2520]: E1213 05:15:21.779025 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.15.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:21.791674 kubelet[2520]: I1213 05:15:21.791633 2520 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:15:21.791834 kubelet[2520]: I1213 05:15:21.791816 2520 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:15:21.792040 kubelet[2520]: I1213 05:15:21.792013 2520 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:15:21.794320 kubelet[2520]: I1213 05:15:21.794298 2520 policy_none.go:49] "None policy: Start" Dec 13 05:15:21.795427 kubelet[2520]: I1213 05:15:21.795405 2520 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:15:21.795679 kubelet[2520]: I1213 05:15:21.795658 2520 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:15:21.802557 kubelet[2520]: I1213 05:15:21.802533 2520 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:15:21.803073 kubelet[2520]: I1213 05:15:21.803053 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:15:21.810435 kubelet[2520]: E1213 05:15:21.810414 2520 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-p0439.gb1.brightbox.com\" not found" Dec 13 05:15:21.832242 kubelet[2520]: I1213 05:15:21.831837 2520 kubelet_node_status.go:73] "Attempting to register node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:21.832469 kubelet[2520]: E1213 05:15:21.832447 2520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.15.106:6443/api/v1/nodes\": dial tcp 10.230.15.106:6443: connect: connection refused" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:21.871700 kubelet[2520]: 
I1213 05:15:21.871657 2520 topology_manager.go:215] "Topology Admit Handler" podUID="2f4fed7f049c19c9eacdaf4c230f7140" podNamespace="kube-system" podName="kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:21.878421 kubelet[2520]: I1213 05:15:21.878380 2520 topology_manager.go:215] "Topology Admit Handler" podUID="d14d3ad4e480ed1ff1df8a4c297648dc" podNamespace="kube-system" podName="kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:21.880723 kubelet[2520]: I1213 05:15:21.880639 2520 topology_manager.go:215] "Topology Admit Handler" podUID="2250899c921a3b640e7b4e9d2e7d4146" podNamespace="kube-system" podName="kube-scheduler-srv-p0439.gb1.brightbox.com" Dec 13 05:15:21.932169 kubelet[2520]: E1213 05:15:21.931977 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0439.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.106:6443: connect: connection refused" interval="400ms" Dec 13 05:15:22.032270 kubelet[2520]: I1213 05:15:22.031575 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f4fed7f049c19c9eacdaf4c230f7140-k8s-certs\") pod \"kube-apiserver-srv-p0439.gb1.brightbox.com\" (UID: \"2f4fed7f049c19c9eacdaf4c230f7140\") " pod="kube-system/kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032270 kubelet[2520]: I1213 05:15:22.031697 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f4fed7f049c19c9eacdaf4c230f7140-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p0439.gb1.brightbox.com\" (UID: \"2f4fed7f049c19c9eacdaf4c230f7140\") " pod="kube-system/kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032270 kubelet[2520]: I1213 05:15:22.031749 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-ca-certs\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032270 kubelet[2520]: I1213 05:15:22.031796 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2250899c921a3b640e7b4e9d2e7d4146-kubeconfig\") pod \"kube-scheduler-srv-p0439.gb1.brightbox.com\" (UID: \"2250899c921a3b640e7b4e9d2e7d4146\") " pod="kube-system/kube-scheduler-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032270 kubelet[2520]: I1213 05:15:22.031858 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032513 kubelet[2520]: I1213 05:15:22.031930 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f4fed7f049c19c9eacdaf4c230f7140-ca-certs\") pod \"kube-apiserver-srv-p0439.gb1.brightbox.com\" (UID: 
\"2f4fed7f049c19c9eacdaf4c230f7140\") " pod="kube-system/kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032513 kubelet[2520]: I1213 05:15:22.031984 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-flexvolume-dir\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032513 kubelet[2520]: I1213 05:15:22.032040 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-k8s-certs\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.032513 kubelet[2520]: I1213 05:15:22.032102 2520 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-kubeconfig\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.036523 kubelet[2520]: I1213 05:15:22.036452 2520 kubelet_node_status.go:73] "Attempting to register node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.036860 kubelet[2520]: E1213 05:15:22.036830 2520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.15.106:6443/api/v1/nodes\": dial tcp 10.230.15.106:6443: connect: connection refused" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.195121 containerd[1621]: time="2024-12-13T05:15:22.194285236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-p0439.gb1.brightbox.com,Uid:2f4fed7f049c19c9eacdaf4c230f7140,Namespace:kube-system,Attempt:0,}" Dec 13 05:15:22.196930 containerd[1621]: time="2024-12-13T05:15:22.196800003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-p0439.gb1.brightbox.com,Uid:d14d3ad4e480ed1ff1df8a4c297648dc,Namespace:kube-system,Attempt:0,}" Dec 13 05:15:22.199611 containerd[1621]: time="2024-12-13T05:15:22.199543069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p0439.gb1.brightbox.com,Uid:2250899c921a3b640e7b4e9d2e7d4146,Namespace:kube-system,Attempt:0,}" Dec 13 05:15:22.333277 kubelet[2520]: E1213 05:15:22.333210 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0439.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.106:6443: connect: connection refused" interval="800ms" Dec 13 05:15:22.440473 kubelet[2520]: I1213 05:15:22.440403 2520 kubelet_node_status.go:73] "Attempting to register node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.441021 kubelet[2520]: E1213 05:15:22.440992 2520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.15.106:6443/api/v1/nodes\": dial tcp 10.230.15.106:6443: connect: connection refused" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:22.635726 kubelet[2520]: W1213 05:15:22.635660 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to 
list *v1.RuntimeClass: Get "https://10.230.15.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:22.635849 kubelet[2520]: E1213 05:15:22.635739 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.15.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:22.799359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547997107.mount: Deactivated successfully. Dec 13 05:15:22.807129 containerd[1621]: time="2024-12-13T05:15:22.807063583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:15:22.814353 containerd[1621]: time="2024-12-13T05:15:22.814104884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:15:22.815335 containerd[1621]: time="2024-12-13T05:15:22.815152499Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:15:22.816735 containerd[1621]: time="2024-12-13T05:15:22.816702044Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:15:22.820524 containerd[1621]: time="2024-12-13T05:15:22.818233783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 05:15:22.820524 containerd[1621]: time="2024-12-13T05:15:22.819740135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 05:15:22.820926 containerd[1621]: time="2024-12-13T05:15:22.820883280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:15:22.822172 containerd[1621]: time="2024-12-13T05:15:22.822080567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 05:15:22.823852 containerd[1621]: time="2024-12-13T05:15:22.823811524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.325469ms" Dec 13 05:15:22.829936 containerd[1621]: time="2024-12-13T05:15:22.829880243Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.163963ms" Dec 13 05:15:22.831037 containerd[1621]: time="2024-12-13T05:15:22.830731551Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.86055ms" Dec 13 05:15:22.907849 kubelet[2520]: W1213 05:15:22.907676 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.15.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p0439.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:22.907849 kubelet[2520]: E1213 05:15:22.907769 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.15.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-p0439.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:22.930363 kubelet[2520]: W1213 05:15:22.930282 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.15.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:22.930573 kubelet[2520]: E1213 05:15:22.930539 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.15.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:22.989707 kubelet[2520]: E1213 05:15:22.989578 2520 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.106:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.106:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-p0439.gb1.brightbox.com.1810a4ad850564b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-p0439.gb1.brightbox.com,UID:srv-p0439.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-p0439.gb1.brightbox.com,},FirstTimestamp:2024-12-13 05:15:21.705661618 +0000 UTC m=+0.648701794,LastTimestamp:2024-12-13 05:15:21.705661618 +0000 UTC m=+0.648701794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-p0439.gb1.brightbox.com,}" Dec 13 05:15:23.017087 containerd[1621]: time="2024-12-13T05:15:23.016875531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:15:23.017087 containerd[1621]: time="2024-12-13T05:15:23.017054021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:15:23.017369 containerd[1621]: time="2024-12-13T05:15:23.017188503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:23.018589 containerd[1621]: time="2024-12-13T05:15:23.018317644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:23.025143 containerd[1621]: time="2024-12-13T05:15:23.024921509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:15:23.025424 containerd[1621]: time="2024-12-13T05:15:23.025164264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:15:23.025424 containerd[1621]: time="2024-12-13T05:15:23.025190356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:23.027088 containerd[1621]: time="2024-12-13T05:15:23.025650314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:23.032092 containerd[1621]: time="2024-12-13T05:15:23.031331447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:15:23.032092 containerd[1621]: time="2024-12-13T05:15:23.031388920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:15:23.032092 containerd[1621]: time="2024-12-13T05:15:23.031406656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:23.032092 containerd[1621]: time="2024-12-13T05:15:23.031507785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:23.139597 kubelet[2520]: E1213 05:15:23.137946 2520 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-p0439.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.106:6443: connect: connection refused" interval="1.6s" Dec 13 05:15:23.163662 containerd[1621]: time="2024-12-13T05:15:23.163511358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-p0439.gb1.brightbox.com,Uid:2f4fed7f049c19c9eacdaf4c230f7140,Namespace:kube-system,Attempt:0,} returns sandbox id \"dda65a780a5a69e14381f06a1b3a6309afe7b75f7e6018fac3069704ee854ebf\"" Dec 13 05:15:23.178504 containerd[1621]: time="2024-12-13T05:15:23.178454531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-p0439.gb1.brightbox.com,Uid:d14d3ad4e480ed1ff1df8a4c297648dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5b71b99fa3371dd498e5fbd556618ab5ffec74067128be401594d72d9605cb6\"" Dec 13 05:15:23.181473 containerd[1621]: time="2024-12-13T05:15:23.181186468Z" level=info msg="CreateContainer within sandbox \"dda65a780a5a69e14381f06a1b3a6309afe7b75f7e6018fac3069704ee854ebf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 05:15:23.185013 containerd[1621]: time="2024-12-13T05:15:23.184934474Z" level=info msg="CreateContainer within sandbox \"c5b71b99fa3371dd498e5fbd556618ab5ffec74067128be401594d72d9605cb6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 05:15:23.191527 containerd[1621]: time="2024-12-13T05:15:23.191493375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-p0439.gb1.brightbox.com,Uid:2250899c921a3b640e7b4e9d2e7d4146,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"2ee78f32bd256af41d4f70ec8b5a84635cc4ca853b9909dac92e582d3e6fd956\"" Dec 13 05:15:23.194472 containerd[1621]: time="2024-12-13T05:15:23.194442085Z" level=info msg="CreateContainer within sandbox \"2ee78f32bd256af41d4f70ec8b5a84635cc4ca853b9909dac92e582d3e6fd956\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 05:15:23.212238 containerd[1621]: time="2024-12-13T05:15:23.212106346Z" level=info msg="CreateContainer within sandbox \"dda65a780a5a69e14381f06a1b3a6309afe7b75f7e6018fac3069704ee854ebf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54c7b8dc1a6420a847adbfb484bbdf9bfb65d431eeeae4e043bbce5c5a7abc20\"" Dec 13 05:15:23.213110 containerd[1621]: time="2024-12-13T05:15:23.212919465Z" level=info msg="StartContainer for \"54c7b8dc1a6420a847adbfb484bbdf9bfb65d431eeeae4e043bbce5c5a7abc20\"" Dec 13 05:15:23.216875 containerd[1621]: time="2024-12-13T05:15:23.216591156Z" level=info msg="CreateContainer within sandbox \"2ee78f32bd256af41d4f70ec8b5a84635cc4ca853b9909dac92e582d3e6fd956\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"08ca9ccbcaee37e3892bb058824933c243ea83e30497cadc9c448ab798b9cee2\"" Dec 13 05:15:23.217585 containerd[1621]: time="2024-12-13T05:15:23.217556297Z" level=info msg="StartContainer for \"08ca9ccbcaee37e3892bb058824933c243ea83e30497cadc9c448ab798b9cee2\"" Dec 13 05:15:23.218100 containerd[1621]: time="2024-12-13T05:15:23.217995740Z" level=info msg="CreateContainer within sandbox \"c5b71b99fa3371dd498e5fbd556618ab5ffec74067128be401594d72d9605cb6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c2024d1bc2171fb4bf45d295312ab84476c26731188f7cd8d9b08a0d0ac8bbc\"" Dec 13 05:15:23.220976 containerd[1621]: time="2024-12-13T05:15:23.219720710Z" level=info msg="StartContainer for \"7c2024d1bc2171fb4bf45d295312ab84476c26731188f7cd8d9b08a0d0ac8bbc\"" Dec 13 05:15:23.235896 kubelet[2520]: W1213 05:15:23.234261 2520 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.15.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:23.236215 kubelet[2520]: E1213 05:15:23.236180 2520 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.15.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:23.246126 kubelet[2520]: I1213 05:15:23.246097 2520 kubelet_node_status.go:73] "Attempting to register node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:23.246985 kubelet[2520]: E1213 05:15:23.246909 2520 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.15.106:6443/api/v1/nodes\": dial tcp 10.230.15.106:6443: connect: connection refused" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:23.414783 containerd[1621]: time="2024-12-13T05:15:23.414639095Z" level=info msg="StartContainer for \"54c7b8dc1a6420a847adbfb484bbdf9bfb65d431eeeae4e043bbce5c5a7abc20\" returns successfully" Dec 13 05:15:23.415006 containerd[1621]: time="2024-12-13T05:15:23.414850590Z" level=info msg="StartContainer for \"08ca9ccbcaee37e3892bb058824933c243ea83e30497cadc9c448ab798b9cee2\" returns successfully" Dec 13 05:15:23.415006 containerd[1621]: time="2024-12-13T05:15:23.414908388Z" level=info msg="StartContainer for 
\"7c2024d1bc2171fb4bf45d295312ab84476c26731188f7cd8d9b08a0d0ac8bbc\" returns successfully" Dec 13 05:15:23.706486 kubelet[2520]: E1213 05:15:23.706338 2520 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.15.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.15.106:6443: connect: connection refused Dec 13 05:15:24.849455 kubelet[2520]: I1213 05:15:24.849381 2520 kubelet_node_status.go:73] "Attempting to register node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:26.495577 kubelet[2520]: E1213 05:15:26.495353 2520 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-p0439.gb1.brightbox.com\" not found" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:26.554653 kubelet[2520]: I1213 05:15:26.554440 2520 kubelet_node_status.go:76] "Successfully registered node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:26.701337 kubelet[2520]: I1213 05:15:26.701271 2520 apiserver.go:52] "Watching apiserver" Dec 13 05:15:26.731398 kubelet[2520]: I1213 05:15:26.731307 2520 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 05:15:28.217080 kubelet[2520]: W1213 05:15:28.216784 2520 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:15:29.248217 systemd[1]: Reloading requested from client PID 2796 ('systemctl') (unit session-9.scope)... Dec 13 05:15:29.248267 systemd[1]: Reloading... Dec 13 05:15:29.379170 zram_generator::config[2836]: No configuration found. Dec 13 05:15:29.574521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 05:15:29.686689 systemd[1]: Reloading finished in 437 ms. Dec 13 05:15:29.739040 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:15:29.740160 kubelet[2520]: I1213 05:15:29.738862 2520 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:15:29.750443 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 05:15:29.751136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:29.762730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 05:15:29.997442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 05:15:30.006788 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 05:15:30.137252 kubelet[2909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:15:30.137252 kubelet[2909]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 05:15:30.137252 kubelet[2909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 05:15:30.137252 kubelet[2909]: I1213 05:15:30.136796 2909 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 05:15:30.143984 kubelet[2909]: I1213 05:15:30.143534 2909 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 05:15:30.143984 kubelet[2909]: I1213 05:15:30.143562 2909 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 05:15:30.143984 kubelet[2909]: I1213 05:15:30.143792 2909 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 05:15:30.146248 kubelet[2909]: I1213 05:15:30.146138 2909 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 05:15:30.158149 kubelet[2909]: I1213 05:15:30.157398 2909 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 05:15:30.167776 kubelet[2909]: I1213 05:15:30.167751 2909 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 05:15:30.169037 kubelet[2909]: I1213 05:15:30.168574 2909 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 05:15:30.169037 kubelet[2909]: I1213 05:15:30.168832 2909 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 05:15:30.169037 kubelet[2909]: I1213 05:15:30.168905 2909 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 05:15:30.169037 kubelet[2909]: I1213 05:15:30.168923 2909 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 05:15:30.169598 kubelet[2909]: I1213 05:15:30.169560 2909 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:15:30.169856 kubelet[2909]: I1213 05:15:30.169835 2909 kubelet.go:396] "Attempting to sync node with API server" Dec 13 05:15:30.169986 kubelet[2909]: I1213 05:15:30.169968 2909 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 05:15:30.170922 
kubelet[2909]: I1213 05:15:30.170900 2909 kubelet.go:312] "Adding apiserver pod source" Dec 13 05:15:30.171061 kubelet[2909]: I1213 05:15:30.171043 2909 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 05:15:30.186383 kubelet[2909]: I1213 05:15:30.183492 2909 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 05:15:30.186383 kubelet[2909]: I1213 05:15:30.183797 2909 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 05:15:30.186383 kubelet[2909]: I1213 05:15:30.185534 2909 server.go:1256] "Started kubelet" Dec 13 05:15:30.200169 kubelet[2909]: I1213 05:15:30.198027 2909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 05:15:30.201603 kubelet[2909]: I1213 05:15:30.201557 2909 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 05:15:30.203764 kubelet[2909]: I1213 05:15:30.203741 2909 server.go:461] "Adding debug handlers to kubelet server" Dec 13 05:15:30.206841 kubelet[2909]: I1213 05:15:30.206061 2909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 05:15:30.208137 kubelet[2909]: I1213 05:15:30.208072 2909 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 05:15:30.215681 kubelet[2909]: I1213 05:15:30.215077 2909 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 05:15:30.217840 kubelet[2909]: I1213 05:15:30.217815 2909 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 05:15:30.218392 kubelet[2909]: I1213 05:15:30.218225 2909 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 05:15:30.223906 kubelet[2909]: I1213 05:15:30.222971 2909 factory.go:221] Registration of the systemd container factory successfully Dec 13 05:15:30.223906 kubelet[2909]: I1213 05:15:30.223081 2909 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 05:15:30.224442 kubelet[2909]: I1213 05:15:30.224417 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 05:15:30.226110 kubelet[2909]: I1213 05:15:30.226089 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 05:15:30.226286 kubelet[2909]: I1213 05:15:30.226252 2909 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 05:15:30.226836 kubelet[2909]: I1213 05:15:30.226813 2909 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 05:15:30.227230 kubelet[2909]: E1213 05:15:30.227045 2909 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 05:15:30.231728 kubelet[2909]: E1213 05:15:30.231658 2909 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 05:15:30.232182 kubelet[2909]: I1213 05:15:30.232082 2909 factory.go:221] Registration of the containerd container factory successfully Dec 13 05:15:30.328239 kubelet[2909]: E1213 05:15:30.328191 2909 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 05:15:30.336334 kubelet[2909]: I1213 05:15:30.336296 2909 kubelet_node_status.go:73] "Attempting to register node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.359604 kubelet[2909]: I1213 05:15:30.359259 2909 kubelet_node_status.go:112] "Node was previously registered" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.360156 kubelet[2909]: I1213 05:15:30.360068 2909 kubelet_node_status.go:76] "Successfully registered node" node="srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.361691 kubelet[2909]: I1213 05:15:30.361661 2909 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 05:15:30.361691 kubelet[2909]: I1213 05:15:30.361689 2909 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 05:15:30.361844 kubelet[2909]: I1213 05:15:30.361829 2909 state_mem.go:36] "Initialized new in-memory state store" Dec 13 05:15:30.363203 kubelet[2909]: I1213 05:15:30.362699 2909 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 05:15:30.363203 kubelet[2909]: I1213 05:15:30.362740 2909 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 05:15:30.363203 kubelet[2909]: I1213 05:15:30.362782 2909 policy_none.go:49] "None policy: Start" Dec 13 05:15:30.365120 kubelet[2909]: I1213 05:15:30.365074 2909 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 05:15:30.365312 kubelet[2909]: I1213 05:15:30.365241 2909 state_mem.go:35] "Initializing new in-memory state store" Dec 13 05:15:30.369031 kubelet[2909]: I1213 05:15:30.365924 2909 state_mem.go:75] "Updated machine memory state" Dec 13 05:15:30.376860 kubelet[2909]: I1213 05:15:30.375616 2909 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 05:15:30.376860 kubelet[2909]: I1213 05:15:30.376030 2909 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 05:15:30.528970 kubelet[2909]: I1213 05:15:30.528869 2909 topology_manager.go:215] "Topology Admit Handler" podUID="2f4fed7f049c19c9eacdaf4c230f7140" podNamespace="kube-system" podName="kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.529240 kubelet[2909]: I1213 05:15:30.529039 2909 topology_manager.go:215] "Topology Admit Handler" podUID="d14d3ad4e480ed1ff1df8a4c297648dc" podNamespace="kube-system" podName="kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.529240 kubelet[2909]: I1213 05:15:30.529108 2909 topology_manager.go:215] "Topology Admit Handler" podUID="2250899c921a3b640e7b4e9d2e7d4146" podNamespace="kube-system" podName="kube-scheduler-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.538111 kubelet[2909]: W1213 05:15:30.537797 2909 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:15:30.538111 kubelet[2909]: W1213 05:15:30.537881 2909 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:15:30.538111 kubelet[2909]: W1213 05:15:30.538063 2909 warnings.go:70] metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 05:15:30.538367 kubelet[2909]: E1213 05:15:30.538320 2909 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625352 kubelet[2909]: I1213 05:15:30.624799 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2250899c921a3b640e7b4e9d2e7d4146-kubeconfig\") pod \"kube-scheduler-srv-p0439.gb1.brightbox.com\" (UID: \"2250899c921a3b640e7b4e9d2e7d4146\") " pod="kube-system/kube-scheduler-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625352 kubelet[2909]: I1213 05:15:30.624859 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-k8s-certs\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625352 kubelet[2909]: I1213 05:15:30.624894 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-kubeconfig\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625352 kubelet[2909]: I1213 05:15:30.624951 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625352 kubelet[2909]: I1213 05:15:30.624983 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-flexvolume-dir\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625746 kubelet[2909]: I1213 05:15:30.625020 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f4fed7f049c19c9eacdaf4c230f7140-ca-certs\") pod \"kube-apiserver-srv-p0439.gb1.brightbox.com\" (UID: \"2f4fed7f049c19c9eacdaf4c230f7140\") " pod="kube-system/kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625746 kubelet[2909]: I1213 05:15:30.625050 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f4fed7f049c19c9eacdaf4c230f7140-k8s-certs\") pod \"kube-apiserver-srv-p0439.gb1.brightbox.com\" (UID: \"2f4fed7f049c19c9eacdaf4c230f7140\") " pod="kube-system/kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625746 kubelet[2909]: I1213 05:15:30.625105 2909 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f4fed7f049c19c9eacdaf4c230f7140-usr-share-ca-certificates\") pod \"kube-apiserver-srv-p0439.gb1.brightbox.com\" (UID: \"2f4fed7f049c19c9eacdaf4c230f7140\") " pod="kube-system/kube-apiserver-srv-p0439.gb1.brightbox.com" Dec 13 05:15:30.625746 kubelet[2909]: I1213 05:15:30.625158 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d14d3ad4e480ed1ff1df8a4c297648dc-ca-certs\") pod \"kube-controller-manager-srv-p0439.gb1.brightbox.com\" (UID: \"d14d3ad4e480ed1ff1df8a4c297648dc\") " pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" Dec 13 05:15:31.172996 kubelet[2909]: I1213 05:15:31.172671 2909 apiserver.go:52] "Watching apiserver" Dec 13 05:15:31.220179 kubelet[2909]: I1213 05:15:31.218445 2909 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 05:15:31.433870 kubelet[2909]: I1213 05:15:31.433714 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-p0439.gb1.brightbox.com" podStartSLOduration=1.433555574 podStartE2EDuration="1.433555574s" podCreationTimestamp="2024-12-13 05:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:15:31.40166564 +0000 UTC m=+1.386678300" watchObservedRunningTime="2024-12-13 05:15:31.433555574 +0000 UTC m=+1.418568229" Dec 13 05:15:31.457582 kubelet[2909]: I1213 05:15:31.457535 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-p0439.gb1.brightbox.com" podStartSLOduration=1.457486236 podStartE2EDuration="1.457486236s" podCreationTimestamp="2024-12-13 05:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:15:31.436323185 +0000 UTC m=+1.421335850" watchObservedRunningTime="2024-12-13 05:15:31.457486236 +0000 UTC m=+1.442498894" Dec 13 05:15:31.482750 kubelet[2909]: I1213 05:15:31.482717 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-p0439.gb1.brightbox.com" podStartSLOduration=3.482676556 podStartE2EDuration="3.482676556s" podCreationTimestamp="2024-12-13 05:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:15:31.458562624 +0000 UTC m=+1.443575288" watchObservedRunningTime="2024-12-13 05:15:31.482676556 +0000 UTC m=+1.467689211" Dec 13 05:15:35.847686 sudo[1898]: pam_unix(sudo:session): session closed for user root Dec 13 05:15:35.993063 sshd[1894]: pam_unix(sshd:session): session closed for user core Dec 13 05:15:35.998638 systemd[1]: sshd@6-10.230.15.106:22-147.75.109.163:40020.service: Deactivated successfully. Dec 13 05:15:36.007070 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 05:15:36.008986 systemd-logind[1602]: Session 9 logged out. Waiting for processes to exit. Dec 13 05:15:36.011594 systemd-logind[1602]: Removed session 9. 
Dec 13 05:15:44.993639 kubelet[2909]: I1213 05:15:44.992878 2909 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 05:15:44.997682 containerd[1621]: time="2024-12-13T05:15:44.997115118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 05:15:45.000863 kubelet[2909]: I1213 05:15:44.998415 2909 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 05:15:45.631524 kubelet[2909]: I1213 05:15:45.631453 2909 topology_manager.go:215] "Topology Admit Handler" podUID="4a3d6abe-27a4-42b9-b769-981dfd1c2391" podNamespace="kube-system" podName="kube-proxy-9x7kr" Dec 13 05:15:45.829588 kubelet[2909]: I1213 05:15:45.829483 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a3d6abe-27a4-42b9-b769-981dfd1c2391-kube-proxy\") pod \"kube-proxy-9x7kr\" (UID: \"4a3d6abe-27a4-42b9-b769-981dfd1c2391\") " pod="kube-system/kube-proxy-9x7kr" Dec 13 05:15:45.829588 kubelet[2909]: I1213 05:15:45.829574 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a3d6abe-27a4-42b9-b769-981dfd1c2391-lib-modules\") pod \"kube-proxy-9x7kr\" (UID: \"4a3d6abe-27a4-42b9-b769-981dfd1c2391\") " pod="kube-system/kube-proxy-9x7kr" Dec 13 05:15:45.829588 kubelet[2909]: I1213 05:15:45.829617 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a3d6abe-27a4-42b9-b769-981dfd1c2391-xtables-lock\") pod \"kube-proxy-9x7kr\" (UID: \"4a3d6abe-27a4-42b9-b769-981dfd1c2391\") " pod="kube-system/kube-proxy-9x7kr" Dec 13 05:15:45.829958 kubelet[2909]: I1213 05:15:45.829654 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rlq\" (UniqueName: \"kubernetes.io/projected/4a3d6abe-27a4-42b9-b769-981dfd1c2391-kube-api-access-d5rlq\") pod \"kube-proxy-9x7kr\" (UID: \"4a3d6abe-27a4-42b9-b769-981dfd1c2391\") " pod="kube-system/kube-proxy-9x7kr" Dec 13 05:15:46.114425 kubelet[2909]: I1213 05:15:46.114312 2909 topology_manager.go:215] "Topology Admit Handler" podUID="90ec2800-cc14-49e5-8ad6-5c1a13f2e7f0" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-x98hz" Dec 13 05:15:46.131830 kubelet[2909]: I1213 05:15:46.131783 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/90ec2800-cc14-49e5-8ad6-5c1a13f2e7f0-var-lib-calico\") pod \"tigera-operator-c7ccbd65-x98hz\" (UID: \"90ec2800-cc14-49e5-8ad6-5c1a13f2e7f0\") " pod="tigera-operator/tigera-operator-c7ccbd65-x98hz" Dec 13 05:15:46.232988 kubelet[2909]: I1213 05:15:46.232874 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpzw\" (UniqueName: \"kubernetes.io/projected/90ec2800-cc14-49e5-8ad6-5c1a13f2e7f0-kube-api-access-nbpzw\") pod \"tigera-operator-c7ccbd65-x98hz\" (UID: \"90ec2800-cc14-49e5-8ad6-5c1a13f2e7f0\") " pod="tigera-operator/tigera-operator-c7ccbd65-x98hz" Dec 13 05:15:46.248260 containerd[1621]: time="2024-12-13T05:15:46.248172397Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-9x7kr,Uid:4a3d6abe-27a4-42b9-b769-981dfd1c2391,Namespace:kube-system,Attempt:0,}" Dec 13 05:15:46.303991 containerd[1621]: time="2024-12-13T05:15:46.303379311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:15:46.303991 containerd[1621]: time="2024-12-13T05:15:46.303570519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:15:46.303991 containerd[1621]: time="2024-12-13T05:15:46.303600460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:46.303991 containerd[1621]: time="2024-12-13T05:15:46.303857776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:46.341234 systemd[1]: run-containerd-runc-k8s.io-813fa21d0c9e20148a6096a83df419928d60de94b26da503e1a7b658c8bc4efb-runc.HkaXmk.mount: Deactivated successfully. Dec 13 05:15:46.388001 containerd[1621]: time="2024-12-13T05:15:46.387842067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9x7kr,Uid:4a3d6abe-27a4-42b9-b769-981dfd1c2391,Namespace:kube-system,Attempt:0,} returns sandbox id \"813fa21d0c9e20148a6096a83df419928d60de94b26da503e1a7b658c8bc4efb\"" Dec 13 05:15:46.414359 containerd[1621]: time="2024-12-13T05:15:46.414281442Z" level=info msg="CreateContainer within sandbox \"813fa21d0c9e20148a6096a83df419928d60de94b26da503e1a7b658c8bc4efb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 05:15:46.425861 containerd[1621]: time="2024-12-13T05:15:46.425819726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-x98hz,Uid:90ec2800-cc14-49e5-8ad6-5c1a13f2e7f0,Namespace:tigera-operator,Attempt:0,}" Dec 13 05:15:46.433786 containerd[1621]: time="2024-12-13T05:15:46.433751307Z" level=info msg="CreateContainer within sandbox \"813fa21d0c9e20148a6096a83df419928d60de94b26da503e1a7b658c8bc4efb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"40b0be69aa242e19cc699e1888823d1fdb54d67e62e58f3cb7f1806b40e4aa12\"" Dec 13 05:15:46.435099 containerd[1621]: time="2024-12-13T05:15:46.435057932Z" level=info msg="StartContainer for \"40b0be69aa242e19cc699e1888823d1fdb54d67e62e58f3cb7f1806b40e4aa12\"" Dec 13 05:15:46.475505 containerd[1621]: time="2024-12-13T05:15:46.475296592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:15:46.476115 containerd[1621]: time="2024-12-13T05:15:46.475479012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:15:46.476115 containerd[1621]: time="2024-12-13T05:15:46.475510818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:46.476115 containerd[1621]: time="2024-12-13T05:15:46.475814790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:46.549173 containerd[1621]: time="2024-12-13T05:15:46.549104325Z" level=info msg="StartContainer for \"40b0be69aa242e19cc699e1888823d1fdb54d67e62e58f3cb7f1806b40e4aa12\" returns successfully" Dec 13 05:15:46.595377 containerd[1621]: time="2024-12-13T05:15:46.595206050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-x98hz,Uid:90ec2800-cc14-49e5-8ad6-5c1a13f2e7f0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f79751c42909cbe13e792c60040707c7daed527bae3c9c386fb1a30f7ef7426c\"" Dec 13 05:15:46.599395 containerd[1621]: time="2024-12-13T05:15:46.599214726Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 05:15:47.348603 kubelet[2909]: I1213 05:15:47.348279 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9x7kr" podStartSLOduration=2.348193739 podStartE2EDuration="2.348193739s" podCreationTimestamp="2024-12-13 05:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:15:47.347929548 +0000 UTC m=+17.332942227" watchObservedRunningTime="2024-12-13 05:15:47.348193739 +0000 UTC m=+17.333206401" Dec 13 05:15:49.487926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264851059.mount: Deactivated successfully. Dec 13 05:15:50.370497 containerd[1621]: time="2024-12-13T05:15:50.370425542Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:50.372063 containerd[1621]: time="2024-12-13T05:15:50.371715925Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301" Dec 13 05:15:50.373241 containerd[1621]: time="2024-12-13T05:15:50.373112061Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:50.378561 containerd[1621]: time="2024-12-13T05:15:50.378489349Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:50.380054 containerd[1621]: time="2024-12-13T05:15:50.379771795Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.780486161s" Dec 13 05:15:50.380054 containerd[1621]: time="2024-12-13T05:15:50.379834485Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 05:15:50.385704 containerd[1621]: time="2024-12-13T05:15:50.385358161Z" level=info msg="CreateContainer within sandbox \"f79751c42909cbe13e792c60040707c7daed527bae3c9c386fb1a30f7ef7426c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 05:15:50.410988 containerd[1621]: time="2024-12-13T05:15:50.410800901Z" level=info msg="CreateContainer within sandbox \"f79751c42909cbe13e792c60040707c7daed527bae3c9c386fb1a30f7ef7426c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns 
container id \"8fd6d5f6dab080aae3301a1d5ed7293d6400eb512f1c84ae6a81e33d1a1c6357\"" Dec 13 05:15:50.413586 containerd[1621]: time="2024-12-13T05:15:50.413350896Z" level=info msg="StartContainer for \"8fd6d5f6dab080aae3301a1d5ed7293d6400eb512f1c84ae6a81e33d1a1c6357\"" Dec 13 05:15:50.456591 systemd[1]: run-containerd-runc-k8s.io-8fd6d5f6dab080aae3301a1d5ed7293d6400eb512f1c84ae6a81e33d1a1c6357-runc.9NrNKW.mount: Deactivated successfully. Dec 13 05:15:50.533967 containerd[1621]: time="2024-12-13T05:15:50.532099966Z" level=info msg="StartContainer for \"8fd6d5f6dab080aae3301a1d5ed7293d6400eb512f1c84ae6a81e33d1a1c6357\" returns successfully" Dec 13 05:15:53.715191 kubelet[2909]: I1213 05:15:53.715056 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-x98hz" podStartSLOduration=3.9321385749999997 podStartE2EDuration="7.714888565s" podCreationTimestamp="2024-12-13 05:15:46 +0000 UTC" firstStartedPulling="2024-12-13 05:15:46.597388826 +0000 UTC m=+16.582401482" lastFinishedPulling="2024-12-13 05:15:50.380138825 +0000 UTC m=+20.365151472" observedRunningTime="2024-12-13 05:15:51.36241369 +0000 UTC m=+21.347426354" watchObservedRunningTime="2024-12-13 05:15:53.714888565 +0000 UTC m=+23.699901230" Dec 13 05:15:53.721060 kubelet[2909]: I1213 05:15:53.715332 2909 topology_manager.go:215] "Topology Admit Handler" podUID="1028b302-5c12-4ec7-8494-a188454392c5" podNamespace="calico-system" podName="calico-typha-74b87474d-p6852" Dec 13 05:15:53.793863 kubelet[2909]: I1213 05:15:53.793715 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1028b302-5c12-4ec7-8494-a188454392c5-tigera-ca-bundle\") pod \"calico-typha-74b87474d-p6852\" (UID: \"1028b302-5c12-4ec7-8494-a188454392c5\") " pod="calico-system/calico-typha-74b87474d-p6852" Dec 13 05:15:53.793863 kubelet[2909]: I1213 05:15:53.793786 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9zz\" (UniqueName: \"kubernetes.io/projected/1028b302-5c12-4ec7-8494-a188454392c5-kube-api-access-rc9zz\") pod \"calico-typha-74b87474d-p6852\" (UID: \"1028b302-5c12-4ec7-8494-a188454392c5\") " pod="calico-system/calico-typha-74b87474d-p6852" Dec 13 05:15:53.793863 kubelet[2909]: I1213 05:15:53.793821 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1028b302-5c12-4ec7-8494-a188454392c5-typha-certs\") pod \"calico-typha-74b87474d-p6852\" (UID: \"1028b302-5c12-4ec7-8494-a188454392c5\") " pod="calico-system/calico-typha-74b87474d-p6852" Dec 13 05:15:53.850503 kubelet[2909]: I1213 05:15:53.850420 2909 topology_manager.go:215] "Topology Admit Handler" podUID="5f6f43dd-dbbd-4dc0-8d00-460f84bca664" podNamespace="calico-system" podName="calico-node-t8nb4" Dec 13 05:15:53.894824 kubelet[2909]: I1213 05:15:53.894764 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-tigera-ca-bundle\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895113 kubelet[2909]: I1213 05:15:53.894890 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-node-certs\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895113 kubelet[2909]: I1213 05:15:53.894929 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-flexvol-driver-host\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895113 kubelet[2909]: I1213 05:15:53.894975 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-policysync\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895113 kubelet[2909]: I1213 05:15:53.895028 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-cni-bin-dir\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895113 kubelet[2909]: I1213 05:15:53.895074 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-xtables-lock\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895425 kubelet[2909]: I1213 05:15:53.895150 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnbj\" (UniqueName: \"kubernetes.io/projected/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-kube-api-access-hlnbj\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895425 kubelet[2909]: I1213 05:15:53.895207 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-lib-modules\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895425 kubelet[2909]: I1213 05:15:53.895251 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-var-lib-calico\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895425 kubelet[2909]: I1213 05:15:53.895283 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-cni-log-dir\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895425 kubelet[2909]: I1213 05:15:53.895318 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-cni-net-dir\") pod 
\"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:53.895689 kubelet[2909]: I1213 05:15:53.895365 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5f6f43dd-dbbd-4dc0-8d00-460f84bca664-var-run-calico\") pod \"calico-node-t8nb4\" (UID: \"5f6f43dd-dbbd-4dc0-8d00-460f84bca664\") " pod="calico-system/calico-node-t8nb4" Dec 13 05:15:54.008319 kubelet[2909]: E1213 05:15:54.004023 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.008319 kubelet[2909]: W1213 05:15:54.004063 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.008319 kubelet[2909]: E1213 05:15:54.004777 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.008319 kubelet[2909]: E1213 05:15:54.006381 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.008319 kubelet[2909]: W1213 05:15:54.006399 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.008319 kubelet[2909]: E1213 05:15:54.006418 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.027223 kubelet[2909]: E1213 05:15:54.026392 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.027223 kubelet[2909]: W1213 05:15:54.026421 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.027223 kubelet[2909]: E1213 05:15:54.026450 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.034000 kubelet[2909]: E1213 05:15:54.033254 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.034000 kubelet[2909]: W1213 05:15:54.033312 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.034000 kubelet[2909]: E1213 05:15:54.033337 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.067288 containerd[1621]: time="2024-12-13T05:15:54.066850278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b87474d-p6852,Uid:1028b302-5c12-4ec7-8494-a188454392c5,Namespace:calico-system,Attempt:0,}" Dec 13 05:15:54.172457 containerd[1621]: time="2024-12-13T05:15:54.171869653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8nb4,Uid:5f6f43dd-dbbd-4dc0-8d00-460f84bca664,Namespace:calico-system,Attempt:0,}" Dec 13 05:15:54.178613 kubelet[2909]: I1213 05:15:54.177621 2909 topology_manager.go:215] "Topology Admit Handler" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" podNamespace="calico-system" podName="csi-node-driver-nfbkn" Dec 13 05:15:54.184689 kubelet[2909]: E1213 05:15:54.183002 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:15:54.193737 kubelet[2909]: E1213 05:15:54.193339 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.193737 kubelet[2909]: W1213 05:15:54.193375 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.194340 kubelet[2909]: E1213 05:15:54.193980 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.202182 kubelet[2909]: E1213 05:15:54.194896 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.202182 kubelet[2909]: W1213 05:15:54.194920 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.202182 kubelet[2909]: E1213 05:15:54.194942 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.202182 kubelet[2909]: E1213 05:15:54.198375 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.202182 kubelet[2909]: W1213 05:15:54.198391 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.202182 kubelet[2909]: E1213 05:15:54.198411 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.211651 kubelet[2909]: E1213 05:15:54.205210 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.211651 kubelet[2909]: W1213 05:15:54.205239 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.211651 kubelet[2909]: E1213 05:15:54.205264 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.211651 kubelet[2909]: E1213 05:15:54.205591 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.211651 kubelet[2909]: W1213 05:15:54.205605 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.211651 kubelet[2909]: E1213 05:15:54.205624 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.211651 kubelet[2909]: E1213 05:15:54.206317 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.211651 kubelet[2909]: W1213 05:15:54.206332 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.211651 kubelet[2909]: E1213 05:15:54.206353 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.211651 kubelet[2909]: E1213 05:15:54.208468 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.212285 kubelet[2909]: W1213 05:15:54.208483 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.212285 kubelet[2909]: E1213 05:15:54.208514 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.212285 kubelet[2909]: E1213 05:15:54.210364 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.212285 kubelet[2909]: W1213 05:15:54.210394 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.212285 kubelet[2909]: E1213 05:15:54.210414 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.212285 kubelet[2909]: E1213 05:15:54.210781 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.212285 kubelet[2909]: W1213 05:15:54.210797 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.212285 kubelet[2909]: E1213 05:15:54.210822 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.212285 kubelet[2909]: E1213 05:15:54.211959 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.212285 kubelet[2909]: W1213 05:15:54.211985 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.212746 kubelet[2909]: E1213 05:15:54.212005 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.217766 kubelet[2909]: E1213 05:15:54.214006 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.217766 kubelet[2909]: W1213 05:15:54.214028 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.217766 kubelet[2909]: E1213 05:15:54.214266 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.217766 kubelet[2909]: E1213 05:15:54.215048 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.217766 kubelet[2909]: W1213 05:15:54.215077 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.217766 kubelet[2909]: E1213 05:15:54.215314 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.217766 kubelet[2909]: E1213 05:15:54.216113 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.217766 kubelet[2909]: W1213 05:15:54.216325 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.217766 kubelet[2909]: E1213 05:15:54.216349 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.217766 kubelet[2909]: E1213 05:15:54.217273 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.218999 kubelet[2909]: W1213 05:15:54.217291 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.218999 kubelet[2909]: E1213 05:15:54.217310 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.232234 kubelet[2909]: E1213 05:15:54.219389 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.232234 kubelet[2909]: W1213 05:15:54.219405 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.232234 kubelet[2909]: E1213 05:15:54.219424 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.232234 kubelet[2909]: E1213 05:15:54.221560 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.232234 kubelet[2909]: W1213 05:15:54.221579 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.232234 kubelet[2909]: E1213 05:15:54.221599 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.232234 kubelet[2909]: E1213 05:15:54.224001 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.232234 kubelet[2909]: W1213 05:15:54.224017 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.232234 kubelet[2909]: E1213 05:15:54.224042 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.232234 kubelet[2909]: E1213 05:15:54.226466 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.234433 kubelet[2909]: W1213 05:15:54.226481 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.234433 kubelet[2909]: E1213 05:15:54.226501 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.234433 kubelet[2909]: E1213 05:15:54.226745 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.234433 kubelet[2909]: W1213 05:15:54.226759 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.234433 kubelet[2909]: E1213 05:15:54.226779 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.234433 kubelet[2909]: E1213 05:15:54.227816 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.234433 kubelet[2909]: W1213 05:15:54.227832 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.234433 kubelet[2909]: E1213 05:15:54.227853 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.234433 kubelet[2909]: E1213 05:15:54.229803 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.234433 kubelet[2909]: W1213 05:15:54.229819 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.234970 kubelet[2909]: E1213 05:15:54.229837 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.234970 kubelet[2909]: I1213 05:15:54.229886 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c-kubelet-dir\") pod \"csi-node-driver-nfbkn\" (UID: \"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c\") " pod="calico-system/csi-node-driver-nfbkn" Dec 13 05:15:54.234970 kubelet[2909]: E1213 05:15:54.230640 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.234970 kubelet[2909]: W1213 05:15:54.230791 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.234970 kubelet[2909]: E1213 05:15:54.230957 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.234970 kubelet[2909]: I1213 05:15:54.230992 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c-varrun\") pod \"csi-node-driver-nfbkn\" (UID: \"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c\") " pod="calico-system/csi-node-driver-nfbkn" Dec 13 05:15:54.234970 kubelet[2909]: E1213 05:15:54.233394 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.234970 kubelet[2909]: W1213 05:15:54.233412 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.234970 kubelet[2909]: E1213 05:15:54.233431 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.237782 kubelet[2909]: I1213 05:15:54.233483 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c-socket-dir\") pod \"csi-node-driver-nfbkn\" (UID: \"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c\") " pod="calico-system/csi-node-driver-nfbkn" Dec 13 05:15:54.237782 kubelet[2909]: E1213 05:15:54.235372 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.237782 kubelet[2909]: W1213 05:15:54.235389 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.237782 kubelet[2909]: E1213 05:15:54.235409 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.237782 kubelet[2909]: I1213 05:15:54.235450 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv94d\" (UniqueName: \"kubernetes.io/projected/ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c-kube-api-access-fv94d\") pod \"csi-node-driver-nfbkn\" (UID: \"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c\") " pod="calico-system/csi-node-driver-nfbkn" Dec 13 05:15:54.243917 kubelet[2909]: E1213 05:15:54.242212 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.243917 kubelet[2909]: W1213 05:15:54.242238 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.243917 kubelet[2909]: E1213 05:15:54.242265 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.243917 kubelet[2909]: I1213 05:15:54.242297 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c-registration-dir\") pod \"csi-node-driver-nfbkn\" (UID: \"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c\") " pod="calico-system/csi-node-driver-nfbkn" Dec 13 05:15:54.243917 kubelet[2909]: E1213 05:15:54.242672 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.243917 kubelet[2909]: W1213 05:15:54.242689 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.243917 kubelet[2909]: E1213 05:15:54.242714 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.243917 kubelet[2909]: E1213 05:15:54.243546 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.243917 kubelet[2909]: W1213 05:15:54.243561 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.244628 kubelet[2909]: E1213 05:15:54.243581 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.248188 kubelet[2909]: E1213 05:15:54.245752 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.248188 kubelet[2909]: W1213 05:15:54.245778 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.248188 kubelet[2909]: E1213 05:15:54.245799 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.251374 kubelet[2909]: E1213 05:15:54.250137 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.251374 kubelet[2909]: W1213 05:15:54.250195 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.251374 kubelet[2909]: E1213 05:15:54.250590 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.251374 kubelet[2909]: W1213 05:15:54.250606 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.251374 kubelet[2909]: E1213 05:15:54.251191 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.251374 kubelet[2909]: E1213 05:15:54.251257 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.254678 kubelet[2909]: E1213 05:15:54.253741 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.254678 kubelet[2909]: W1213 05:15:54.253778 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.255367 kubelet[2909]: E1213 05:15:54.255304 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.263553 kubelet[2909]: E1213 05:15:54.260340 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.263553 kubelet[2909]: W1213 05:15:54.260366 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.263553 kubelet[2909]: E1213 05:15:54.260627 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.263553 kubelet[2909]: W1213 05:15:54.260641 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.263553 kubelet[2909]: E1213 05:15:54.260663 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.263553 kubelet[2909]: E1213 05:15:54.260956 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.263553 kubelet[2909]: W1213 05:15:54.260970 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.263553 kubelet[2909]: E1213 05:15:54.260996 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.263553 kubelet[2909]: E1213 05:15:54.261019 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.263553 kubelet[2909]: E1213 05:15:54.261971 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.264686 kubelet[2909]: W1213 05:15:54.261987 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.264686 kubelet[2909]: E1213 05:15:54.262006 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.345017 kubelet[2909]: E1213 05:15:54.344655 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.345017 kubelet[2909]: W1213 05:15:54.344702 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.345017 kubelet[2909]: E1213 05:15:54.344746 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.347201 kubelet[2909]: E1213 05:15:54.346044 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.347201 kubelet[2909]: W1213 05:15:54.346064 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.347201 kubelet[2909]: E1213 05:15:54.346254 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.348672 kubelet[2909]: E1213 05:15:54.348336 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.348672 kubelet[2909]: W1213 05:15:54.348417 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.348672 kubelet[2909]: E1213 05:15:54.348458 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.350060 kubelet[2909]: E1213 05:15:54.349578 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.350060 kubelet[2909]: W1213 05:15:54.349595 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.350366 kubelet[2909]: E1213 05:15:54.350189 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.351160 kubelet[2909]: E1213 05:15:54.350776 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.351160 kubelet[2909]: W1213 05:15:54.350796 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.351160 kubelet[2909]: E1213 05:15:54.351070 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.352626 kubelet[2909]: E1213 05:15:54.352139 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.352626 kubelet[2909]: W1213 05:15:54.352277 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.353218 kubelet[2909]: E1213 05:15:54.352787 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.355562 kubelet[2909]: E1213 05:15:54.355067 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.355562 kubelet[2909]: W1213 05:15:54.355092 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.355562 kubelet[2909]: E1213 05:15:54.355292 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.355562 kubelet[2909]: E1213 05:15:54.355487 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.355562 kubelet[2909]: W1213 05:15:54.355500 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.356197 kubelet[2909]: E1213 05:15:54.355983 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.358553 kubelet[2909]: E1213 05:15:54.357411 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.358553 kubelet[2909]: W1213 05:15:54.357430 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.358553 kubelet[2909]: E1213 05:15:54.358080 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.359238 kubelet[2909]: E1213 05:15:54.359219 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.359447 kubelet[2909]: W1213 05:15:54.359425 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.359820 kubelet[2909]: E1213 05:15:54.359750 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.360622 kubelet[2909]: E1213 05:15:54.360447 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.360622 kubelet[2909]: W1213 05:15:54.360473 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.361028 kubelet[2909]: E1213 05:15:54.360800 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.361869 kubelet[2909]: E1213 05:15:54.361592 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.361869 kubelet[2909]: W1213 05:15:54.361840 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.363401 kubelet[2909]: E1213 05:15:54.363209 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.364330 kubelet[2909]: E1213 05:15:54.363929 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.364330 kubelet[2909]: W1213 05:15:54.363955 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.365984 kubelet[2909]: E1213 05:15:54.365261 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.367777 kubelet[2909]: E1213 05:15:54.367576 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.367777 kubelet[2909]: W1213 05:15:54.367601 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.368158 kubelet[2909]: E1213 05:15:54.367936 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.369065 kubelet[2909]: E1213 05:15:54.368759 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.369065 kubelet[2909]: W1213 05:15:54.368783 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.369561 kubelet[2909]: E1213 05:15:54.369397 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.370360 kubelet[2909]: E1213 05:15:54.370078 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.370360 kubelet[2909]: W1213 05:15:54.370100 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.370811 kubelet[2909]: E1213 05:15:54.370567 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.371727 kubelet[2909]: E1213 05:15:54.370986 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.371727 kubelet[2909]: W1213 05:15:54.371479 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.371727 kubelet[2909]: E1213 05:15:54.371637 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.372515 kubelet[2909]: E1213 05:15:54.372498 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.372695 kubelet[2909]: W1213 05:15:54.372676 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.373196 kubelet[2909]: E1213 05:15:54.373158 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.373947 kubelet[2909]: E1213 05:15:54.373732 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.373947 kubelet[2909]: W1213 05:15:54.373749 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.374305 kubelet[2909]: E1213 05:15:54.374079 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.375402 kubelet[2909]: E1213 05:15:54.374975 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.375402 kubelet[2909]: W1213 05:15:54.374991 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.375938 kubelet[2909]: E1213 05:15:54.375563 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.378635 kubelet[2909]: E1213 05:15:54.377057 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.378635 kubelet[2909]: W1213 05:15:54.377075 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.378635 kubelet[2909]: E1213 05:15:54.378237 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.378635 kubelet[2909]: E1213 05:15:54.378502 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.378635 kubelet[2909]: W1213 05:15:54.378516 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.378965 kubelet[2909]: E1213 05:15:54.378945 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 05:15:54.379303 kubelet[2909]: E1213 05:15:54.379284 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.379427 kubelet[2909]: W1213 05:15:54.379389 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.379734 kubelet[2909]: E1213 05:15:54.379716 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.380306 kubelet[2909]: E1213 05:15:54.380286 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.380520 kubelet[2909]: W1213 05:15:54.380486 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.381564 kubelet[2909]: E1213 05:15:54.380633 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.382649 kubelet[2909]: E1213 05:15:54.382630 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.382783 kubelet[2909]: W1213 05:15:54.382762 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.383076 kubelet[2909]: E1213 05:15:54.383027 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.396592 containerd[1621]: time="2024-12-13T05:15:54.394725327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:15:54.396592 containerd[1621]: time="2024-12-13T05:15:54.396483869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:15:54.396806 containerd[1621]: time="2024-12-13T05:15:54.396509137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:54.404054 containerd[1621]: time="2024-12-13T05:15:54.400073969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:54.408178 containerd[1621]: time="2024-12-13T05:15:54.407820614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:15:54.408178 containerd[1621]: time="2024-12-13T05:15:54.407941426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:15:54.408178 containerd[1621]: time="2024-12-13T05:15:54.407999692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:54.408793 kubelet[2909]: E1213 05:15:54.408384 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 05:15:54.408793 kubelet[2909]: W1213 05:15:54.408407 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 05:15:54.408793 kubelet[2909]: E1213 05:15:54.408437 2909 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 05:15:54.409964 containerd[1621]: time="2024-12-13T05:15:54.409508058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:15:54.593425 containerd[1621]: time="2024-12-13T05:15:54.593355994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8nb4,Uid:5f6f43dd-dbbd-4dc0-8d00-460f84bca664,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a77efa745e9f1fd457a7bad4e98b9025470a7b93e6568b821806b95f9886db2\"" Dec 13 05:15:54.608344 containerd[1621]: time="2024-12-13T05:15:54.607798974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 05:15:54.609719 containerd[1621]: time="2024-12-13T05:15:54.609637963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b87474d-p6852,Uid:1028b302-5c12-4ec7-8494-a188454392c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"55efae43ac55cf0739f000151cc56a5139327c98cedb2a8b28d2387bb19fa8be\"" Dec 13 05:15:56.228008 kubelet[2909]: E1213 05:15:56.227643 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:15:56.254320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195940149.mount: Deactivated successfully. 
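The driver-call.go and plugins.go errors above come from the kubelet's FlexVolume prober: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary it finds (here nodeagent~uds/uds) with the subcommand "init", and parses a JSON status object from stdout. The uds binary has not been installed yet, so the exec fails, stdout stays empty, and unmarshalling an empty string is exactly the "unexpected end of JSON input" being logged. The flexvol-driver init container created below, from the pod2daemon-flexvol image whose pull starts here, is what eventually places the real driver in that directory. What follows is a minimal illustrative sketch of that call convention, not Calico's actual uds driver; the JSON reply shape is the standard FlexVolume driver status.

// flexvol_stub.go — hypothetical stand-in showing the kubelet's FlexVolume
// driver-call contract: invoked with a subcommand, it must answer with a JSON
// status object on stdout (an empty stdout is what driver-call.go rejects above).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the fields the kubelet expects back from a FlexVolume call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: uds <init|mount|unmount> [args...]")
		os.Exit(1)
	}
	var reply driverStatus
	switch os.Args[1] {
	case "init":
		// "init" is the probe seen in the log; report success and no attach support.
		reply = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		// Calls this stub does not implement are reported as unsupported.
		reply = driverStatus{Status: "Not supported", Message: "unimplemented call: " + os.Args[1]}
	}
	out, err := json.Marshal(reply)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}

Once a binary answering like this exists at nodeagent~uds/uds, the periodic probe errors above stop; until then they are noisy but harmless, since no pod is actually using that FlexVolume driver yet.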
Dec 13 05:15:56.406487 containerd[1621]: time="2024-12-13T05:15:56.406380842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:56.407649 containerd[1621]: time="2024-12-13T05:15:56.407590452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 05:15:56.408483 containerd[1621]: time="2024-12-13T05:15:56.408412966Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:56.411614 containerd[1621]: time="2024-12-13T05:15:56.411559098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:56.412952 containerd[1621]: time="2024-12-13T05:15:56.412747384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.804843518s" Dec 13 05:15:56.412952 containerd[1621]: time="2024-12-13T05:15:56.412806830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 05:15:56.413981 containerd[1621]: time="2024-12-13T05:15:56.413636649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 05:15:56.417395 containerd[1621]: time="2024-12-13T05:15:56.416676211Z" level=info msg="CreateContainer within sandbox \"2a77efa745e9f1fd457a7bad4e98b9025470a7b93e6568b821806b95f9886db2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 05:15:56.451667 containerd[1621]: time="2024-12-13T05:15:56.451612512Z" level=info msg="CreateContainer within sandbox \"2a77efa745e9f1fd457a7bad4e98b9025470a7b93e6568b821806b95f9886db2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"17f1ed9603b4200a2f2501a7c368d5b7065ebe06e782ee44d3678920019927f6\"" Dec 13 05:15:56.454153 containerd[1621]: time="2024-12-13T05:15:56.453208462Z" level=info msg="StartContainer for \"17f1ed9603b4200a2f2501a7c368d5b7065ebe06e782ee44d3678920019927f6\"" Dec 13 05:15:56.558887 containerd[1621]: time="2024-12-13T05:15:56.558718637Z" level=info msg="StartContainer for \"17f1ed9603b4200a2f2501a7c368d5b7065ebe06e782ee44d3678920019927f6\" returns successfully" Dec 13 05:15:56.635202 containerd[1621]: time="2024-12-13T05:15:56.615769099Z" level=info msg="shim disconnected" id=17f1ed9603b4200a2f2501a7c368d5b7065ebe06e782ee44d3678920019927f6 namespace=k8s.io Dec 13 05:15:56.635202 containerd[1621]: time="2024-12-13T05:15:56.635053709Z" level=warning msg="cleaning up after shim disconnected" id=17f1ed9603b4200a2f2501a7c368d5b7065ebe06e782ee44d3678920019927f6 namespace=k8s.io Dec 13 05:15:56.635202 containerd[1621]: time="2024-12-13T05:15:56.635085065Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:15:57.153684 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-17f1ed9603b4200a2f2501a7c368d5b7065ebe06e782ee44d3678920019927f6-rootfs.mount: Deactivated successfully. Dec 13 05:15:58.229518 kubelet[2909]: E1213 05:15:58.227850 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:15:59.490416 containerd[1621]: time="2024-12-13T05:15:59.490344206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:59.492482 containerd[1621]: time="2024-12-13T05:15:59.492432120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 05:15:59.493491 containerd[1621]: time="2024-12-13T05:15:59.493455647Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:59.497659 containerd[1621]: time="2024-12-13T05:15:59.497560114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:15:59.498582 containerd[1621]: time="2024-12-13T05:15:59.498391953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.084716819s" Dec 13 05:15:59.498582 containerd[1621]: time="2024-12-13T05:15:59.498446915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 05:15:59.500991 containerd[1621]: time="2024-12-13T05:15:59.500697484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 05:15:59.520908 containerd[1621]: time="2024-12-13T05:15:59.520873118Z" level=info msg="CreateContainer within sandbox \"55efae43ac55cf0739f000151cc56a5139327c98cedb2a8b28d2387bb19fa8be\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 05:15:59.541531 containerd[1621]: time="2024-12-13T05:15:59.541382784Z" level=info msg="CreateContainer within sandbox \"55efae43ac55cf0739f000151cc56a5139327c98cedb2a8b28d2387bb19fa8be\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e4c646e6cfd8378d3e98993168e6d9ec00b34463054dc20b98a8f1c21d3d7e8b\"" Dec 13 05:15:59.542114 containerd[1621]: time="2024-12-13T05:15:59.542081880Z" level=info msg="StartContainer for \"e4c646e6cfd8378d3e98993168e6d9ec00b34463054dc20b98a8f1c21d3d7e8b\"" Dec 13 05:15:59.661188 containerd[1621]: time="2024-12-13T05:15:59.661091497Z" level=info msg="StartContainer for \"e4c646e6cfd8378d3e98993168e6d9ec00b34463054dc20b98a8f1c21d3d7e8b\" returns successfully" Dec 13 05:16:00.228578 kubelet[2909]: E1213 05:16:00.228111 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:16:01.393496 kubelet[2909]: I1213 05:16:01.393340 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:16:02.229959 kubelet[2909]: E1213 05:16:02.228828 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:16:04.228726 kubelet[2909]: E1213 05:16:04.228677 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:16:06.229939 kubelet[2909]: E1213 05:16:06.228359 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:16:06.290923 containerd[1621]: time="2024-12-13T05:16:06.290642130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:06.292764 containerd[1621]: time="2024-12-13T05:16:06.292493831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 05:16:06.294153 containerd[1621]: time="2024-12-13T05:16:06.293659803Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:06.297005 containerd[1621]: time="2024-12-13T05:16:06.296937164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:06.298447 containerd[1621]: time="2024-12-13T05:16:06.298227579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.797478779s" Dec 13 05:16:06.298447 containerd[1621]: time="2024-12-13T05:16:06.298290369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 05:16:06.305849 containerd[1621]: time="2024-12-13T05:16:06.305322467Z" level=info msg="CreateContainer within sandbox \"2a77efa745e9f1fd457a7bad4e98b9025470a7b93e6568b821806b95f9886db2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 05:16:06.329444 containerd[1621]: time="2024-12-13T05:16:06.329311113Z" level=info msg="CreateContainer within sandbox \"2a77efa745e9f1fd457a7bad4e98b9025470a7b93e6568b821806b95f9886db2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} 
returns container id \"3f61eba95f8524513868a2a02161dbaa3d39074265a934d5d2e1c0dbc8c0e27e\"" Dec 13 05:16:06.333490 containerd[1621]: time="2024-12-13T05:16:06.333225920Z" level=info msg="StartContainer for \"3f61eba95f8524513868a2a02161dbaa3d39074265a934d5d2e1c0dbc8c0e27e\"" Dec 13 05:16:06.474155 containerd[1621]: time="2024-12-13T05:16:06.474052383Z" level=info msg="StartContainer for \"3f61eba95f8524513868a2a02161dbaa3d39074265a934d5d2e1c0dbc8c0e27e\" returns successfully" Dec 13 05:16:07.477358 kubelet[2909]: I1213 05:16:07.477290 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-74b87474d-p6852" podStartSLOduration=9.587172121 podStartE2EDuration="14.471165632s" podCreationTimestamp="2024-12-13 05:15:53 +0000 UTC" firstStartedPulling="2024-12-13 05:15:54.614860463 +0000 UTC m=+24.599873111" lastFinishedPulling="2024-12-13 05:15:59.498853957 +0000 UTC m=+29.483866622" observedRunningTime="2024-12-13 05:16:00.407427811 +0000 UTC m=+30.392440477" watchObservedRunningTime="2024-12-13 05:16:07.471165632 +0000 UTC m=+37.456178289" Dec 13 05:16:07.699250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f61eba95f8524513868a2a02161dbaa3d39074265a934d5d2e1c0dbc8c0e27e-rootfs.mount: Deactivated successfully. Dec 13 05:16:07.712448 kubelet[2909]: I1213 05:16:07.700634 2909 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 05:16:07.717956 containerd[1621]: time="2024-12-13T05:16:07.716621172Z" level=info msg="shim disconnected" id=3f61eba95f8524513868a2a02161dbaa3d39074265a934d5d2e1c0dbc8c0e27e namespace=k8s.io Dec 13 05:16:07.717956 containerd[1621]: time="2024-12-13T05:16:07.717463324Z" level=warning msg="cleaning up after shim disconnected" id=3f61eba95f8524513868a2a02161dbaa3d39074265a934d5d2e1c0dbc8c0e27e namespace=k8s.io Dec 13 05:16:07.717956 containerd[1621]: time="2024-12-13T05:16:07.717490135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 05:16:07.759056 kubelet[2909]: I1213 05:16:07.758898 2909 topology_manager.go:215] "Topology Admit Handler" podUID="cef81659-bce4-4b5b-b8bf-c2339e0fd749" podNamespace="kube-system" podName="coredns-76f75df574-dthps" Dec 13 05:16:07.775912 kubelet[2909]: I1213 05:16:07.774894 2909 topology_manager.go:215] "Topology Admit Handler" podUID="20ff6575-c92f-4eb2-882b-7354b8ce1048" podNamespace="kube-system" podName="coredns-76f75df574-m7856" Dec 13 05:16:07.789190 containerd[1621]: time="2024-12-13T05:16:07.787327274Z" level=warning msg="cleanup warnings time=\"2024-12-13T05:16:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 05:16:07.795901 kubelet[2909]: I1213 05:16:07.793392 2909 topology_manager.go:215] "Topology Admit Handler" podUID="b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb" podNamespace="calico-system" podName="calico-kube-controllers-548b6bb445-n77k4" Dec 13 05:16:07.795901 kubelet[2909]: I1213 05:16:07.793751 2909 topology_manager.go:215] "Topology Admit Handler" podUID="7d984527-00ee-4ef0-a476-1512e9636ccc" podNamespace="calico-apiserver" podName="calico-apiserver-795898bcbf-x8w4g" Dec 13 05:16:07.795901 kubelet[2909]: I1213 05:16:07.793986 2909 topology_manager.go:215] "Topology Admit Handler" podUID="6bc8feac-28cf-423e-8b2e-f3d9212b2634" podNamespace="calico-apiserver" podName="calico-apiserver-795898bcbf-ck94r" Dec 13 05:16:07.860156 kubelet[2909]: I1213 05:16:07.860099 2909 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69djs\" (UniqueName: \"kubernetes.io/projected/cef81659-bce4-4b5b-b8bf-c2339e0fd749-kube-api-access-69djs\") pod \"coredns-76f75df574-dthps\" (UID: \"cef81659-bce4-4b5b-b8bf-c2339e0fd749\") " pod="kube-system/coredns-76f75df574-dthps" Dec 13 05:16:07.860363 kubelet[2909]: I1213 05:16:07.860177 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzqpf\" (UniqueName: \"kubernetes.io/projected/20ff6575-c92f-4eb2-882b-7354b8ce1048-kube-api-access-kzqpf\") pod \"coredns-76f75df574-m7856\" (UID: \"20ff6575-c92f-4eb2-882b-7354b8ce1048\") " pod="kube-system/coredns-76f75df574-m7856" Dec 13 05:16:07.860363 kubelet[2909]: I1213 05:16:07.860231 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cef81659-bce4-4b5b-b8bf-c2339e0fd749-config-volume\") pod \"coredns-76f75df574-dthps\" (UID: \"cef81659-bce4-4b5b-b8bf-c2339e0fd749\") " pod="kube-system/coredns-76f75df574-dthps" Dec 13 05:16:07.860363 kubelet[2909]: I1213 05:16:07.860266 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb-tigera-ca-bundle\") pod \"calico-kube-controllers-548b6bb445-n77k4\" (UID: \"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb\") " pod="calico-system/calico-kube-controllers-548b6bb445-n77k4" Dec 13 05:16:07.860363 kubelet[2909]: I1213 05:16:07.860310 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsbk\" (UniqueName: \"kubernetes.io/projected/b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb-kube-api-access-rcsbk\") pod \"calico-kube-controllers-548b6bb445-n77k4\" (UID: \"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb\") " pod="calico-system/calico-kube-controllers-548b6bb445-n77k4" Dec 13 05:16:07.860363 kubelet[2909]: I1213 05:16:07.860341 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8w7\" (UniqueName: \"kubernetes.io/projected/7d984527-00ee-4ef0-a476-1512e9636ccc-kube-api-access-tn8w7\") pod \"calico-apiserver-795898bcbf-x8w4g\" (UID: \"7d984527-00ee-4ef0-a476-1512e9636ccc\") " pod="calico-apiserver/calico-apiserver-795898bcbf-x8w4g" Dec 13 05:16:07.862429 kubelet[2909]: I1213 05:16:07.860371 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20ff6575-c92f-4eb2-882b-7354b8ce1048-config-volume\") pod \"coredns-76f75df574-m7856\" (UID: \"20ff6575-c92f-4eb2-882b-7354b8ce1048\") " pod="kube-system/coredns-76f75df574-m7856" Dec 13 05:16:07.862429 kubelet[2909]: I1213 05:16:07.860400 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d984527-00ee-4ef0-a476-1512e9636ccc-calico-apiserver-certs\") pod \"calico-apiserver-795898bcbf-x8w4g\" (UID: \"7d984527-00ee-4ef0-a476-1512e9636ccc\") " pod="calico-apiserver/calico-apiserver-795898bcbf-x8w4g" Dec 13 05:16:07.862429 kubelet[2909]: I1213 05:16:07.860446 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7hj\" (UniqueName: 
\"kubernetes.io/projected/6bc8feac-28cf-423e-8b2e-f3d9212b2634-kube-api-access-zk7hj\") pod \"calico-apiserver-795898bcbf-ck94r\" (UID: \"6bc8feac-28cf-423e-8b2e-f3d9212b2634\") " pod="calico-apiserver/calico-apiserver-795898bcbf-ck94r" Dec 13 05:16:07.862429 kubelet[2909]: I1213 05:16:07.860495 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6bc8feac-28cf-423e-8b2e-f3d9212b2634-calico-apiserver-certs\") pod \"calico-apiserver-795898bcbf-ck94r\" (UID: \"6bc8feac-28cf-423e-8b2e-f3d9212b2634\") " pod="calico-apiserver/calico-apiserver-795898bcbf-ck94r" Dec 13 05:16:08.074570 containerd[1621]: time="2024-12-13T05:16:08.074524515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dthps,Uid:cef81659-bce4-4b5b-b8bf-c2339e0fd749,Namespace:kube-system,Attempt:0,}" Dec 13 05:16:08.105013 containerd[1621]: time="2024-12-13T05:16:08.104965763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m7856,Uid:20ff6575-c92f-4eb2-882b-7354b8ce1048,Namespace:kube-system,Attempt:0,}" Dec 13 05:16:08.132927 containerd[1621]: time="2024-12-13T05:16:08.132425140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-ck94r,Uid:6bc8feac-28cf-423e-8b2e-f3d9212b2634,Namespace:calico-apiserver,Attempt:0,}" Dec 13 05:16:08.133669 containerd[1621]: time="2024-12-13T05:16:08.133629790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548b6bb445-n77k4,Uid:b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb,Namespace:calico-system,Attempt:0,}" Dec 13 05:16:08.135233 containerd[1621]: time="2024-12-13T05:16:08.135193606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-x8w4g,Uid:7d984527-00ee-4ef0-a476-1512e9636ccc,Namespace:calico-apiserver,Attempt:0,}" Dec 13 05:16:08.243670 containerd[1621]: time="2024-12-13T05:16:08.243587692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfbkn,Uid:ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c,Namespace:calico-system,Attempt:0,}" Dec 13 05:16:08.470735 containerd[1621]: time="2024-12-13T05:16:08.469406055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 05:16:08.569466 containerd[1621]: time="2024-12-13T05:16:08.569343144Z" level=error msg="Failed to destroy network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.581696 containerd[1621]: time="2024-12-13T05:16:08.581650128Z" level=error msg="encountered an error cleaning up failed sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.600691 containerd[1621]: time="2024-12-13T05:16:08.600618606Z" level=error msg="Failed to destroy network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.603254 
containerd[1621]: time="2024-12-13T05:16:08.603199593Z" level=error msg="encountered an error cleaning up failed sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.603412 containerd[1621]: time="2024-12-13T05:16:08.603360553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dthps,Uid:cef81659-bce4-4b5b-b8bf-c2339e0fd749,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.606741 containerd[1621]: time="2024-12-13T05:16:08.606694985Z" level=error msg="Failed to destroy network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.607397 containerd[1621]: time="2024-12-13T05:16:08.607336420Z" level=error msg="encountered an error cleaning up failed sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.607492 containerd[1621]: time="2024-12-13T05:16:08.607413887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-x8w4g,Uid:7d984527-00ee-4ef0-a476-1512e9636ccc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.611905 containerd[1621]: time="2024-12-13T05:16:08.611685501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-ck94r,Uid:6bc8feac-28cf-423e-8b2e-f3d9212b2634,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.612760 containerd[1621]: time="2024-12-13T05:16:08.612595004Z" level=error msg="Failed to destroy network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.614699 containerd[1621]: time="2024-12-13T05:16:08.614530235Z" level=error msg="encountered an error cleaning up failed sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.615639 containerd[1621]: time="2024-12-13T05:16:08.615310808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m7856,Uid:20ff6575-c92f-4eb2-882b-7354b8ce1048,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.619212 kubelet[2909]: E1213 05:16:08.617893 2909 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.619212 kubelet[2909]: E1213 05:16:08.617896 2909 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.619212 kubelet[2909]: E1213 05:16:08.618028 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dthps" Dec 13 05:16:08.619212 kubelet[2909]: E1213 05:16:08.618028 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795898bcbf-ck94r" Dec 13 05:16:08.620267 kubelet[2909]: E1213 05:16:08.618077 2909 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dthps" Dec 13 05:16:08.620267 kubelet[2909]: E1213 05:16:08.618139 2909 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 13 05:16:08.620267 kubelet[2909]: E1213 05:16:08.618201 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795898bcbf-x8w4g" Dec 13 05:16:08.621376 kubelet[2909]: E1213 05:16:08.618211 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dthps_kube-system(cef81659-bce4-4b5b-b8bf-c2339e0fd749)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dthps_kube-system(cef81659-bce4-4b5b-b8bf-c2339e0fd749)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dthps" podUID="cef81659-bce4-4b5b-b8bf-c2339e0fd749" Dec 13 05:16:08.621376 kubelet[2909]: E1213 05:16:08.618239 2909 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795898bcbf-x8w4g" Dec 13 05:16:08.621376 kubelet[2909]: E1213 05:16:08.618325 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-795898bcbf-x8w4g_calico-apiserver(7d984527-00ee-4ef0-a476-1512e9636ccc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-795898bcbf-x8w4g_calico-apiserver(7d984527-00ee-4ef0-a476-1512e9636ccc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795898bcbf-x8w4g" podUID="7d984527-00ee-4ef0-a476-1512e9636ccc" Dec 13 05:16:08.621610 kubelet[2909]: E1213 05:16:08.618448 2909 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.621610 kubelet[2909]: E1213 05:16:08.618486 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m7856" Dec 13 05:16:08.621610 kubelet[2909]: E1213 05:16:08.618512 2909 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m7856" Dec 13 05:16:08.622421 kubelet[2909]: E1213 05:16:08.618609 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m7856_kube-system(20ff6575-c92f-4eb2-882b-7354b8ce1048)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m7856_kube-system(20ff6575-c92f-4eb2-882b-7354b8ce1048)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m7856" podUID="20ff6575-c92f-4eb2-882b-7354b8ce1048" Dec 13 05:16:08.622421 kubelet[2909]: E1213 05:16:08.618077 2909 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795898bcbf-ck94r" Dec 13 05:16:08.622421 kubelet[2909]: E1213 05:16:08.618685 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-795898bcbf-ck94r_calico-apiserver(6bc8feac-28cf-423e-8b2e-f3d9212b2634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-795898bcbf-ck94r_calico-apiserver(6bc8feac-28cf-423e-8b2e-f3d9212b2634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795898bcbf-ck94r" podUID="6bc8feac-28cf-423e-8b2e-f3d9212b2634" Dec 13 05:16:08.632783 containerd[1621]: time="2024-12-13T05:16:08.632497900Z" level=error msg="Failed to destroy network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.634424 containerd[1621]: time="2024-12-13T05:16:08.634257982Z" level=error msg="encountered an error cleaning up failed sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
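The RunPodSandbox failures above, and the calico-kube-controllers and csi-node-driver ones that follow, all fail inside the Calico CNI plugin for the reason the error text itself gives: /var/lib/calico/nodename does not exist yet. That file is written by the calico/node container (the calico-node-t8nb4 pod started earlier) once it is running with /var/lib/calico/ mounted, and the CNI network config is written by the install-cni container started above; until both are in place every CNI ADD/DEL fails, which is why the coredns, calico-apiserver, calico-kube-controllers and csi-node-driver pods stay in this create/stop-sandbox retry loop. Below is a rough, illustrative check of those two preconditions, assuming the nodename path from this log and the conventional CNI config directory /etc/cni/net.d (the directory and the file name Calico writes there, often 10-calico.conflist, are not shown in the log and are assumptions here); this is not Calico code.

// calico_cni_ready.go — illustrative readiness check for the node-level state
// the failing sandboxes above depend on.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	ready := true

	// 1. The Calico CNI plugin reads the node name written by a running calico/node.
	if data, err := os.ReadFile("/var/lib/calico/nodename"); err != nil {
		// This is the state in the log: "stat /var/lib/calico/nodename: no such file or directory".
		fmt.Fprintf(os.Stderr, "calico/node not ready: %v\n", err)
		ready = false
	} else {
		fmt.Printf("calico/node ready, node name %q\n", strings.TrimSpace(string(data)))
	}

	// 2. The runtime reports NetworkReady only once a CNI network config exists.
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conf*")
	if len(confs) == 0 {
		fmt.Fprintln(os.Stderr, "no CNI network config yet: NetworkReady stays false")
		ready = false
	} else {
		fmt.Println("CNI config present:", confs)
	}

	if !ready {
		os.Exit(1)
	}
}

Until those checks pass, the kubelet keeps tearing down and recreating these sandboxes, which is the StopPodSandbox/RunPodSandbox churn that continues below.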
Dec 13 05:16:08.634424 containerd[1621]: time="2024-12-13T05:16:08.634339056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548b6bb445-n77k4,Uid:b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.635268 kubelet[2909]: E1213 05:16:08.634729 2909 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.635268 kubelet[2909]: E1213 05:16:08.634900 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548b6bb445-n77k4" Dec 13 05:16:08.635268 kubelet[2909]: E1213 05:16:08.634945 2909 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-548b6bb445-n77k4" Dec 13 05:16:08.635477 kubelet[2909]: E1213 05:16:08.635015 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-548b6bb445-n77k4_calico-system(b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-548b6bb445-n77k4_calico-system(b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-548b6bb445-n77k4" podUID="b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb" Dec 13 05:16:08.646481 containerd[1621]: time="2024-12-13T05:16:08.646415122Z" level=error msg="Failed to destroy network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.647041 containerd[1621]: time="2024-12-13T05:16:08.646989213Z" level=error msg="encountered an error cleaning up failed sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.647152 containerd[1621]: time="2024-12-13T05:16:08.647083677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfbkn,Uid:ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.647493 kubelet[2909]: E1213 05:16:08.647435 2909 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:08.647493 kubelet[2909]: E1213 05:16:08.647492 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nfbkn" Dec 13 05:16:08.648055 kubelet[2909]: E1213 05:16:08.647522 2909 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nfbkn" Dec 13 05:16:08.648055 kubelet[2909]: E1213 05:16:08.647599 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nfbkn_calico-system(ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nfbkn_calico-system(ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:16:09.457181 kubelet[2909]: I1213 05:16:09.456908 2909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:09.463253 kubelet[2909]: I1213 05:16:09.462411 2909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:09.476184 kubelet[2909]: I1213 05:16:09.475618 2909 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:09.480441 kubelet[2909]: I1213 05:16:09.480063 2909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:09.490607 containerd[1621]: time="2024-12-13T05:16:09.490260559Z" level=info msg="StopPodSandbox for \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\"" Dec 13 05:16:09.491425 containerd[1621]: time="2024-12-13T05:16:09.491156474Z" level=info msg="StopPodSandbox for \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\"" Dec 13 05:16:09.492113 containerd[1621]: time="2024-12-13T05:16:09.492081298Z" level=info msg="Ensure that sandbox 45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40 in task-service has been cleanup successfully" Dec 13 05:16:09.492568 containerd[1621]: time="2024-12-13T05:16:09.492345736Z" level=info msg="StopPodSandbox for \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\"" Dec 13 05:16:09.493159 containerd[1621]: time="2024-12-13T05:16:09.492876387Z" level=info msg="Ensure that sandbox 3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f in task-service has been cleanup successfully" Dec 13 05:16:09.493602 containerd[1621]: time="2024-12-13T05:16:09.492093420Z" level=info msg="Ensure that sandbox 82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7 in task-service has been cleanup successfully" Dec 13 05:16:09.494753 kubelet[2909]: I1213 05:16:09.494674 2909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:09.496930 containerd[1621]: time="2024-12-13T05:16:09.496900741Z" level=info msg="StopPodSandbox for \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\"" Dec 13 05:16:09.497798 containerd[1621]: time="2024-12-13T05:16:09.497455940Z" level=info msg="StopPodSandbox for \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\"" Dec 13 05:16:09.498621 containerd[1621]: time="2024-12-13T05:16:09.498590627Z" level=info msg="Ensure that sandbox 9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131 in task-service has been cleanup successfully" Dec 13 05:16:09.501059 containerd[1621]: time="2024-12-13T05:16:09.500929543Z" level=info msg="Ensure that sandbox f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570 in task-service has been cleanup successfully" Dec 13 05:16:09.505433 kubelet[2909]: I1213 05:16:09.505375 2909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:09.507325 containerd[1621]: time="2024-12-13T05:16:09.507029876Z" level=info msg="StopPodSandbox for \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\"" Dec 13 05:16:09.507325 containerd[1621]: time="2024-12-13T05:16:09.507288120Z" level=info msg="Ensure that sandbox e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1 in task-service has been cleanup successfully" Dec 13 05:16:09.616959 containerd[1621]: time="2024-12-13T05:16:09.616544190Z" level=error msg="StopPodSandbox for \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\" failed" error="failed to destroy network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:09.617316 kubelet[2909]: E1213 05:16:09.617023 2909 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:09.640145 kubelet[2909]: E1213 05:16:09.640058 2909 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7"} Dec 13 05:16:09.641529 kubelet[2909]: E1213 05:16:09.640180 2909 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:16:09.641529 kubelet[2909]: E1213 05:16:09.640232 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nfbkn" podUID="ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c" Dec 13 05:16:09.643045 kubelet[2909]: E1213 05:16:09.642367 2909 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:09.643045 kubelet[2909]: E1213 05:16:09.642415 2909 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1"} Dec 13 05:16:09.643045 kubelet[2909]: E1213 05:16:09.642467 2909 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bc8feac-28cf-423e-8b2e-f3d9212b2634\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:16:09.643045 kubelet[2909]: E1213 05:16:09.642502 2909 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"6bc8feac-28cf-423e-8b2e-f3d9212b2634\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795898bcbf-ck94r" podUID="6bc8feac-28cf-423e-8b2e-f3d9212b2634" Dec 13 05:16:09.643887 containerd[1621]: time="2024-12-13T05:16:09.641906330Z" level=error msg="StopPodSandbox for \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\" failed" error="failed to destroy network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:09.658844 containerd[1621]: time="2024-12-13T05:16:09.658759238Z" level=error msg="StopPodSandbox for \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\" failed" error="failed to destroy network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:09.659514 kubelet[2909]: E1213 05:16:09.659463 2909 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:09.659631 kubelet[2909]: E1213 05:16:09.659537 2909 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40"} Dec 13 05:16:09.659631 kubelet[2909]: E1213 05:16:09.659591 2909 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d984527-00ee-4ef0-a476-1512e9636ccc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:16:09.659911 kubelet[2909]: E1213 05:16:09.659653 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d984527-00ee-4ef0-a476-1512e9636ccc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795898bcbf-x8w4g" podUID="7d984527-00ee-4ef0-a476-1512e9636ccc" Dec 13 05:16:09.661265 containerd[1621]: 
time="2024-12-13T05:16:09.661213468Z" level=error msg="StopPodSandbox for \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\" failed" error="failed to destroy network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:09.662008 kubelet[2909]: E1213 05:16:09.661735 2909 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:09.662301 kubelet[2909]: E1213 05:16:09.662275 2909 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570"} Dec 13 05:16:09.662495 kubelet[2909]: E1213 05:16:09.662372 2909 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cef81659-bce4-4b5b-b8bf-c2339e0fd749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:16:09.662993 kubelet[2909]: E1213 05:16:09.662564 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cef81659-bce4-4b5b-b8bf-c2339e0fd749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dthps" podUID="cef81659-bce4-4b5b-b8bf-c2339e0fd749" Dec 13 05:16:09.667988 containerd[1621]: time="2024-12-13T05:16:09.667934165Z" level=error msg="StopPodSandbox for \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\" failed" error="failed to destroy network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:09.668544 kubelet[2909]: E1213 05:16:09.668417 2909 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:09.668848 kubelet[2909]: E1213 05:16:09.668719 2909 kuberuntime_manager.go:1381] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f"} Dec 13 05:16:09.668848 kubelet[2909]: E1213 05:16:09.668819 2909 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20ff6575-c92f-4eb2-882b-7354b8ce1048\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:16:09.669150 kubelet[2909]: E1213 05:16:09.669084 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20ff6575-c92f-4eb2-882b-7354b8ce1048\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m7856" podUID="20ff6575-c92f-4eb2-882b-7354b8ce1048" Dec 13 05:16:09.675163 containerd[1621]: time="2024-12-13T05:16:09.675072913Z" level=error msg="StopPodSandbox for \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\" failed" error="failed to destroy network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 05:16:09.675621 kubelet[2909]: E1213 05:16:09.675485 2909 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:09.675621 kubelet[2909]: E1213 05:16:09.675553 2909 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131"} Dec 13 05:16:09.675621 kubelet[2909]: E1213 05:16:09.675599 2909 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 05:16:09.676072 kubelet[2909]: E1213 05:16:09.675868 2909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-548b6bb445-n77k4" podUID="b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb" Dec 13 05:16:15.151329 kubelet[2909]: I1213 05:16:15.151194 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:16:17.945898 systemd-journald[1173]: Under memory pressure, flushing caches. Dec 13 05:16:17.944822 systemd-resolved[1512]: Under memory pressure, flushing caches. Dec 13 05:16:17.945050 systemd-resolved[1512]: Flushed all caches. Dec 13 05:16:19.069697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162040389.mount: Deactivated successfully. Dec 13 05:16:19.186346 containerd[1621]: time="2024-12-13T05:16:19.184493522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 05:16:19.188637 containerd[1621]: time="2024-12-13T05:16:19.188599304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.719114504s" Dec 13 05:16:19.188799 containerd[1621]: time="2024-12-13T05:16:19.188769523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 05:16:19.189242 containerd[1621]: time="2024-12-13T05:16:19.189114127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:19.229049 containerd[1621]: time="2024-12-13T05:16:19.228993999Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:19.235149 containerd[1621]: time="2024-12-13T05:16:19.233655868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:19.264502 containerd[1621]: time="2024-12-13T05:16:19.264240552Z" level=info msg="CreateContainer within sandbox \"2a77efa745e9f1fd457a7bad4e98b9025470a7b93e6568b821806b95f9886db2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 05:16:19.337282 containerd[1621]: time="2024-12-13T05:16:19.336617670Z" level=info msg="CreateContainer within sandbox \"2a77efa745e9f1fd457a7bad4e98b9025470a7b93e6568b821806b95f9886db2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3411b8e7d13cddc13e84a4cd7adf8595debe7c3d7adf0bda34fa7e1ae9da44a0\"" Dec 13 05:16:19.343028 containerd[1621]: time="2024-12-13T05:16:19.342557331Z" level=info msg="StartContainer for \"3411b8e7d13cddc13e84a4cd7adf8595debe7c3d7adf0bda34fa7e1ae9da44a0\"" Dec 13 05:16:19.536168 containerd[1621]: time="2024-12-13T05:16:19.535527171Z" level=info msg="StartContainer for \"3411b8e7d13cddc13e84a4cd7adf8595debe7c3d7adf0bda34fa7e1ae9da44a0\" returns successfully" Dec 13 05:16:19.766418 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 05:16:19.766778 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Dec 13 05:16:19.990021 systemd-journald[1173]: Under memory pressure, flushing caches. Dec 13 05:16:19.987605 systemd-resolved[1512]: Under memory pressure, flushing caches. Dec 13 05:16:19.987627 systemd-resolved[1512]: Flushed all caches. Dec 13 05:16:21.232966 containerd[1621]: time="2024-12-13T05:16:21.230047058Z" level=info msg="StopPodSandbox for \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\"" Dec 13 05:16:21.404854 kubelet[2909]: I1213 05:16:21.404783 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-t8nb4" podStartSLOduration=3.815352194 podStartE2EDuration="28.39944613s" podCreationTimestamp="2024-12-13 05:15:53 +0000 UTC" firstStartedPulling="2024-12-13 05:15:54.605497749 +0000 UTC m=+24.590510396" lastFinishedPulling="2024-12-13 05:16:19.189591686 +0000 UTC m=+49.174604332" observedRunningTime="2024-12-13 05:16:19.70765157 +0000 UTC m=+49.692664259" watchObservedRunningTime="2024-12-13 05:16:21.39944613 +0000 UTC m=+51.384458790" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.394 [INFO][4113] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.395 [INFO][4113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" iface="eth0" netns="/var/run/netns/cni-692317e7-9f11-8d5b-7a06-2fb38e9dca37" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.397 [INFO][4113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" iface="eth0" netns="/var/run/netns/cni-692317e7-9f11-8d5b-7a06-2fb38e9dca37" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.399 [INFO][4113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" iface="eth0" netns="/var/run/netns/cni-692317e7-9f11-8d5b-7a06-2fb38e9dca37" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.399 [INFO][4113] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.399 [INFO][4113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.639 [INFO][4158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.641 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.641 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.657 [WARNING][4158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.657 [INFO][4158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.659 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:21.675266 containerd[1621]: 2024-12-13 05:16:21.662 [INFO][4113] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:21.680149 containerd[1621]: time="2024-12-13T05:16:21.678889265Z" level=info msg="TearDown network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\" successfully" Dec 13 05:16:21.680149 containerd[1621]: time="2024-12-13T05:16:21.678948417Z" level=info msg="StopPodSandbox for \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\" returns successfully" Dec 13 05:16:21.683682 containerd[1621]: time="2024-12-13T05:16:21.681416413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-x8w4g,Uid:7d984527-00ee-4ef0-a476-1512e9636ccc,Namespace:calico-apiserver,Attempt:1,}" Dec 13 05:16:21.682493 systemd[1]: run-netns-cni\x2d692317e7\x2d9f11\x2d8d5b\x2d7a06\x2d2fb38e9dca37.mount: Deactivated successfully. 
Dec 13 05:16:21.860858 kernel: bpftool[4213]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 05:16:22.192386 systemd-networkd[1257]: cali9ea0ff524d5: Link UP Dec 13 05:16:22.195585 systemd-networkd[1257]: cali9ea0ff524d5: Gained carrier Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:21.951 [INFO][4198] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0 calico-apiserver-795898bcbf- calico-apiserver 7d984527-00ee-4ef0-a476-1512e9636ccc 768 0 2024-12-13 05:15:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795898bcbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-p0439.gb1.brightbox.com calico-apiserver-795898bcbf-x8w4g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ea0ff524d5 [] []}} ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:21.952 [INFO][4198] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.091 [INFO][4234] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" HandleID="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.111 [INFO][4234] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" HandleID="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011aaf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-p0439.gb1.brightbox.com", "pod":"calico-apiserver-795898bcbf-x8w4g", "timestamp":"2024-12-13 05:16:22.090540242 +0000 UTC"}, Hostname:"srv-p0439.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.111 [INFO][4234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.113 [INFO][4234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.113 [INFO][4234] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0439.gb1.brightbox.com' Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.116 [INFO][4234] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.128 [INFO][4234] ipam/ipam.go 372: Looking up existing affinities for host host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.138 [INFO][4234] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.140 [INFO][4234] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.143 [INFO][4234] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.143 [INFO][4234] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.145 [INFO][4234] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.150 [INFO][4234] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.159 [INFO][4234] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.65/26] block=192.168.124.64/26 handle="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.159 [INFO][4234] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.65/26] handle="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.159 [INFO][4234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:16:22.232313 containerd[1621]: 2024-12-13 05:16:22.159 [INFO][4234] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.65/26] IPv6=[] ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" HandleID="k8s-pod-network.e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:22.238464 containerd[1621]: 2024-12-13 05:16:22.164 [INFO][4198] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d984527-00ee-4ef0-a476-1512e9636ccc", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-795898bcbf-x8w4g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ea0ff524d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:22.238464 containerd[1621]: 2024-12-13 05:16:22.164 [INFO][4198] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.65/32] ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:22.238464 containerd[1621]: 2024-12-13 05:16:22.165 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ea0ff524d5 ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:22.238464 containerd[1621]: 2024-12-13 05:16:22.189 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:22.238464 containerd[1621]: 2024-12-13 05:16:22.192 [INFO][4198] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d984527-00ee-4ef0-a476-1512e9636ccc", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a", Pod:"calico-apiserver-795898bcbf-x8w4g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ea0ff524d5", MAC:"6e:ee:cc:71:02:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:22.238464 containerd[1621]: 2024-12-13 05:16:22.221 [INFO][4198] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-x8w4g" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:22.245529 containerd[1621]: time="2024-12-13T05:16:22.241374232Z" level=info msg="StopPodSandbox for \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\"" Dec 13 05:16:22.262078 containerd[1621]: time="2024-12-13T05:16:22.260543020Z" level=info msg="StopPodSandbox for \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\"" Dec 13 05:16:22.411752 containerd[1621]: time="2024-12-13T05:16:22.409553146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:16:22.411752 containerd[1621]: time="2024-12-13T05:16:22.411257970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:16:22.411752 containerd[1621]: time="2024-12-13T05:16:22.411279209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:22.411752 containerd[1621]: time="2024-12-13T05:16:22.411452685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:22.511530 systemd-networkd[1257]: vxlan.calico: Link UP Dec 13 05:16:22.511543 systemd-networkd[1257]: vxlan.calico: Gained carrier Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.504 [INFO][4280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.505 [INFO][4280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" iface="eth0" netns="/var/run/netns/cni-2e054eab-fa90-7e87-77f4-6846396a25a0" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.507 [INFO][4280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" iface="eth0" netns="/var/run/netns/cni-2e054eab-fa90-7e87-77f4-6846396a25a0" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.507 [INFO][4280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" iface="eth0" netns="/var/run/netns/cni-2e054eab-fa90-7e87-77f4-6846396a25a0" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.507 [INFO][4280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.507 [INFO][4280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.697 [INFO][4340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.698 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.698 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.769 [WARNING][4340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.769 [INFO][4340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.774 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:22.791450 containerd[1621]: 2024-12-13 05:16:22.787 [INFO][4280] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:22.796433 containerd[1621]: time="2024-12-13T05:16:22.796246440Z" level=info msg="TearDown network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\" successfully" Dec 13 05:16:22.796433 containerd[1621]: time="2024-12-13T05:16:22.796303462Z" level=info msg="StopPodSandbox for \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\" returns successfully" Dec 13 05:16:22.797396 containerd[1621]: time="2024-12-13T05:16:22.797349043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfbkn,Uid:ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c,Namespace:calico-system,Attempt:1,}" Dec 13 05:16:22.799072 systemd[1]: run-netns-cni\x2d2e054eab\x2dfa90\x2d7e87\x2d77f4\x2d6846396a25a0.mount: Deactivated successfully. Dec 13 05:16:22.826889 containerd[1621]: time="2024-12-13T05:16:22.826835565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-x8w4g,Uid:7d984527-00ee-4ef0-a476-1512e9636ccc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a\"" Dec 13 05:16:22.833751 containerd[1621]: time="2024-12-13T05:16:22.833488111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.617 [INFO][4292] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.620 [INFO][4292] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" iface="eth0" netns="/var/run/netns/cni-7b57d2c7-d163-2cd0-31fa-96dfaf9ab757" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.622 [INFO][4292] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" iface="eth0" netns="/var/run/netns/cni-7b57d2c7-d163-2cd0-31fa-96dfaf9ab757" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.624 [INFO][4292] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" iface="eth0" netns="/var/run/netns/cni-7b57d2c7-d163-2cd0-31fa-96dfaf9ab757" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.624 [INFO][4292] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.624 [INFO][4292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.807 [INFO][4370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.809 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.809 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.837 [WARNING][4370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.837 [INFO][4370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.842 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:22.852158 containerd[1621]: 2024-12-13 05:16:22.849 [INFO][4292] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:22.855455 containerd[1621]: time="2024-12-13T05:16:22.855215336Z" level=info msg="TearDown network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\" successfully" Dec 13 05:16:22.855455 containerd[1621]: time="2024-12-13T05:16:22.855275567Z" level=info msg="StopPodSandbox for \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\" returns successfully" Dec 13 05:16:22.857803 containerd[1621]: time="2024-12-13T05:16:22.857659354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-ck94r,Uid:6bc8feac-28cf-423e-8b2e-f3d9212b2634,Namespace:calico-apiserver,Attempt:1,}" Dec 13 05:16:22.859423 systemd[1]: run-netns-cni\x2d7b57d2c7\x2dd163\x2d2cd0\x2d31fa\x2d96dfaf9ab757.mount: Deactivated successfully. 
Dec 13 05:16:23.135555 systemd-networkd[1257]: calica7cca48d86: Link UP Dec 13 05:16:23.138271 systemd-networkd[1257]: calica7cca48d86: Gained carrier Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:22.934 [INFO][4399] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0 csi-node-driver- calico-system ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c 776 0 2024-12-13 05:15:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-p0439.gb1.brightbox.com csi-node-driver-nfbkn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calica7cca48d86 [] []}} ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:22.934 [INFO][4399] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.033 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" HandleID="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.067 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" HandleID="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a4f00), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0439.gb1.brightbox.com", "pod":"csi-node-driver-nfbkn", "timestamp":"2024-12-13 05:16:23.033329703 +0000 UTC"}, Hostname:"srv-p0439.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.071 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.073 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.073 [INFO][4420] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0439.gb1.brightbox.com' Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.081 [INFO][4420] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.087 [INFO][4420] ipam/ipam.go 372: Looking up existing affinities for host host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.094 [INFO][4420] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.098 [INFO][4420] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.102 [INFO][4420] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.102 [INFO][4420] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.104 [INFO][4420] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.109 [INFO][4420] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.116 [INFO][4420] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.66/26] block=192.168.124.64/26 handle="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.117 [INFO][4420] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.66/26] handle="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.117 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:16:23.185145 containerd[1621]: 2024-12-13 05:16:23.117 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.66/26] IPv6=[] ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" HandleID="k8s-pod-network.89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:23.186515 containerd[1621]: 2024-12-13 05:16:23.122 [INFO][4399] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-nfbkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica7cca48d86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:23.186515 containerd[1621]: 2024-12-13 05:16:23.124 [INFO][4399] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.66/32] ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:23.186515 containerd[1621]: 2024-12-13 05:16:23.124 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica7cca48d86 ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:23.186515 containerd[1621]: 2024-12-13 05:16:23.139 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:23.186515 containerd[1621]: 2024-12-13 05:16:23.140 [INFO][4399] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef", Pod:"csi-node-driver-nfbkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica7cca48d86", MAC:"62:85:36:4f:f6:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:23.186515 containerd[1621]: 2024-12-13 05:16:23.174 [INFO][4399] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef" Namespace="calico-system" Pod="csi-node-driver-nfbkn" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:23.234152 containerd[1621]: time="2024-12-13T05:16:23.232182843Z" level=info msg="StopPodSandbox for \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\"" Dec 13 05:16:23.241573 containerd[1621]: time="2024-12-13T05:16:23.240469176Z" level=info msg="StopPodSandbox for \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\"" Dec 13 05:16:23.376078 containerd[1621]: time="2024-12-13T05:16:23.361519913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:16:23.376078 containerd[1621]: time="2024-12-13T05:16:23.361741632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:16:23.376078 containerd[1621]: time="2024-12-13T05:16:23.361762759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:23.376078 containerd[1621]: time="2024-12-13T05:16:23.363384220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:23.372549 systemd-networkd[1257]: cali8e53388ed5f: Link UP Dec 13 05:16:23.377290 systemd-networkd[1257]: cali8e53388ed5f: Gained carrier Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:22.962 [INFO][4408] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0 calico-apiserver-795898bcbf- calico-apiserver 6bc8feac-28cf-423e-8b2e-f3d9212b2634 777 0 2024-12-13 05:15:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795898bcbf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-p0439.gb1.brightbox.com calico-apiserver-795898bcbf-ck94r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e53388ed5f [] []}} ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:22.963 [INFO][4408] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.051 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" HandleID="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.073 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" HandleID="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-p0439.gb1.brightbox.com", "pod":"calico-apiserver-795898bcbf-ck94r", "timestamp":"2024-12-13 05:16:23.051083805 +0000 UTC"}, Hostname:"srv-p0439.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.074 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.118 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.118 [INFO][4425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0439.gb1.brightbox.com' Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.130 [INFO][4425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.160 [INFO][4425] ipam/ipam.go 372: Looking up existing affinities for host host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.185 [INFO][4425] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.216 [INFO][4425] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.248 [INFO][4425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.248 [INFO][4425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.264 [INFO][4425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.283 [INFO][4425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.308 [INFO][4425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.67/26] block=192.168.124.64/26 handle="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.309 [INFO][4425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.67/26] handle="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.309 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:16:23.423336 containerd[1621]: 2024-12-13 05:16:23.309 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.67/26] IPv6=[] ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" HandleID="k8s-pod-network.4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:23.430499 containerd[1621]: 2024-12-13 05:16:23.322 [INFO][4408] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bc8feac-28cf-423e-8b2e-f3d9212b2634", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-795898bcbf-ck94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e53388ed5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:23.430499 containerd[1621]: 2024-12-13 05:16:23.322 [INFO][4408] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.67/32] ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:23.430499 containerd[1621]: 2024-12-13 05:16:23.322 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e53388ed5f ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:23.430499 containerd[1621]: 2024-12-13 05:16:23.381 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:23.430499 containerd[1621]: 2024-12-13 05:16:23.391 [INFO][4408] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bc8feac-28cf-423e-8b2e-f3d9212b2634", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a", Pod:"calico-apiserver-795898bcbf-ck94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e53388ed5f", MAC:"0a:93:fa:1d:c0:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:23.430499 containerd[1621]: 2024-12-13 05:16:23.416 [INFO][4408] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a" Namespace="calico-apiserver" Pod="calico-apiserver-795898bcbf-ck94r" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:23.442712 systemd-networkd[1257]: cali9ea0ff524d5: Gained IPv6LL Dec 13 05:16:23.624480 containerd[1621]: time="2024-12-13T05:16:23.623563571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:16:23.624480 containerd[1621]: time="2024-12-13T05:16:23.623668640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:16:23.624480 containerd[1621]: time="2024-12-13T05:16:23.623693485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:23.624480 containerd[1621]: time="2024-12-13T05:16:23.623849947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:23.634691 containerd[1621]: time="2024-12-13T05:16:23.634201095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nfbkn,Uid:ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef\"" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.534 [INFO][4497] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.534 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" iface="eth0" netns="/var/run/netns/cni-f144861a-7c1a-81a6-7f5e-742424208beb" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.535 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" iface="eth0" netns="/var/run/netns/cni-f144861a-7c1a-81a6-7f5e-742424208beb" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.536 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" iface="eth0" netns="/var/run/netns/cni-f144861a-7c1a-81a6-7f5e-742424208beb" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.536 [INFO][4497] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.536 [INFO][4497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.649 [INFO][4563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.652 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.653 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.670 [WARNING][4563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.670 [INFO][4563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.673 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:16:23.704600 containerd[1621]: 2024-12-13 05:16:23.685 [INFO][4497] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:23.713276 systemd[1]: run-netns-cni\x2df144861a\x2d7c1a\x2d81a6\x2d7f5e\x2d742424208beb.mount: Deactivated successfully. Dec 13 05:16:23.713852 containerd[1621]: time="2024-12-13T05:16:23.713805754Z" level=info msg="TearDown network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\" successfully" Dec 13 05:16:23.714026 containerd[1621]: time="2024-12-13T05:16:23.713963120Z" level=info msg="StopPodSandbox for \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\" returns successfully" Dec 13 05:16:23.732358 containerd[1621]: time="2024-12-13T05:16:23.732290155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dthps,Uid:cef81659-bce4-4b5b-b8bf-c2339e0fd749,Namespace:kube-system,Attempt:1,}" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.605 [INFO][4524] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.606 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" iface="eth0" netns="/var/run/netns/cni-66d7b6ee-46c4-dd95-5caf-3b3093fe1a37" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.606 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" iface="eth0" netns="/var/run/netns/cni-66d7b6ee-46c4-dd95-5caf-3b3093fe1a37" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.612 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" iface="eth0" netns="/var/run/netns/cni-66d7b6ee-46c4-dd95-5caf-3b3093fe1a37" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.612 [INFO][4524] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.612 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.719 [INFO][4585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.721 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.721 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.738 [WARNING][4585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.738 [INFO][4585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.742 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:23.752382 containerd[1621]: 2024-12-13 05:16:23.744 [INFO][4524] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:23.755451 containerd[1621]: time="2024-12-13T05:16:23.755414064Z" level=info msg="TearDown network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\" successfully" Dec 13 05:16:23.755689 containerd[1621]: time="2024-12-13T05:16:23.755661810Z" level=info msg="StopPodSandbox for \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\" returns successfully" Dec 13 05:16:23.759145 containerd[1621]: time="2024-12-13T05:16:23.757507709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548b6bb445-n77k4,Uid:b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb,Namespace:calico-system,Attempt:1,}" Dec 13 05:16:23.759676 systemd[1]: run-netns-cni\x2d66d7b6ee\x2d46c4\x2ddd95\x2d5caf\x2d3b3093fe1a37.mount: Deactivated successfully. 
Dec 13 05:16:23.855580 containerd[1621]: time="2024-12-13T05:16:23.854583195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795898bcbf-ck94r,Uid:6bc8feac-28cf-423e-8b2e-f3d9212b2634,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a\"" Dec 13 05:16:24.001285 systemd-networkd[1257]: cali7cfb1e1261e: Link UP Dec 13 05:16:24.005274 systemd-networkd[1257]: cali7cfb1e1261e: Gained carrier Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.863 [INFO][4631] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0 calico-kube-controllers-548b6bb445- calico-system b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb 790 0 2024-12-13 05:15:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:548b6bb445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-p0439.gb1.brightbox.com calico-kube-controllers-548b6bb445-n77k4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7cfb1e1261e [] []}} ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.863 [INFO][4631] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.931 [INFO][4649] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" HandleID="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.948 [INFO][4649] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" HandleID="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038c810), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-p0439.gb1.brightbox.com", "pod":"calico-kube-controllers-548b6bb445-n77k4", "timestamp":"2024-12-13 05:16:23.931724272 +0000 UTC"}, Hostname:"srv-p0439.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.948 [INFO][4649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.948 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.948 [INFO][4649] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0439.gb1.brightbox.com' Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.951 [INFO][4649] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.957 [INFO][4649] ipam/ipam.go 372: Looking up existing affinities for host host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.963 [INFO][4649] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.965 [INFO][4649] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.970 [INFO][4649] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.970 [INFO][4649] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.972 [INFO][4649] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.978 [INFO][4649] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.986 [INFO][4649] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.68/26] block=192.168.124.64/26 handle="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.987 [INFO][4649] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.68/26] handle="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.987 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:16:24.032334 containerd[1621]: 2024-12-13 05:16:23.987 [INFO][4649] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.68/26] IPv6=[] ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" HandleID="k8s-pod-network.226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:24.034791 containerd[1621]: 2024-12-13 05:16:23.991 [INFO][4631] cni-plugin/k8s.go 386: Populated endpoint ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0", GenerateName:"calico-kube-controllers-548b6bb445-", Namespace:"calico-system", SelfLink:"", UID:"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548b6bb445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-548b6bb445-n77k4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cfb1e1261e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:24.034791 containerd[1621]: 2024-12-13 05:16:23.991 [INFO][4631] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.68/32] ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:24.034791 containerd[1621]: 2024-12-13 05:16:23.991 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cfb1e1261e ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:24.034791 containerd[1621]: 2024-12-13 05:16:24.007 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 
05:16:24.034791 containerd[1621]: 2024-12-13 05:16:24.008 [INFO][4631] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0", GenerateName:"calico-kube-controllers-548b6bb445-", Namespace:"calico-system", SelfLink:"", UID:"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548b6bb445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db", Pod:"calico-kube-controllers-548b6bb445-n77k4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cfb1e1261e", MAC:"9a:14:0e:4b:b1:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:24.034791 containerd[1621]: 2024-12-13 05:16:24.025 [INFO][4631] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db" Namespace="calico-system" Pod="calico-kube-controllers-548b6bb445-n77k4" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:24.059001 systemd-networkd[1257]: cali6d32163c62b: Link UP Dec 13 05:16:24.061745 systemd-networkd[1257]: cali6d32163c62b: Gained carrier Dec 13 05:16:24.083613 systemd-networkd[1257]: vxlan.calico: Gained IPv6LL Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.867 [INFO][4621] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0 coredns-76f75df574- kube-system cef81659-bce4-4b5b-b8bf-c2339e0fd749 789 0 2024-12-13 05:15:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-p0439.gb1.brightbox.com coredns-76f75df574-dthps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6d32163c62b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" 
WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.867 [INFO][4621] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.934 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" HandleID="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.951 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" HandleID="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290810), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-p0439.gb1.brightbox.com", "pod":"coredns-76f75df574-dthps", "timestamp":"2024-12-13 05:16:23.934656004 +0000 UTC"}, Hostname:"srv-p0439.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.951 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.987 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.987 [INFO][4650] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0439.gb1.brightbox.com' Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.989 [INFO][4650] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:23.998 [INFO][4650] ipam/ipam.go 372: Looking up existing affinities for host host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.009 [INFO][4650] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.013 [INFO][4650] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.017 [INFO][4650] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.017 [INFO][4650] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.020 [INFO][4650] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.029 [INFO][4650] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.046 [INFO][4650] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.69/26] block=192.168.124.64/26 handle="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.046 [INFO][4650] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.69/26] handle="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.046 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 05:16:24.099160 containerd[1621]: 2024-12-13 05:16:24.046 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.69/26] IPv6=[] ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" HandleID="k8s-pod-network.9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:24.101762 containerd[1621]: 2024-12-13 05:16:24.052 [INFO][4621] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cef81659-bce4-4b5b-b8bf-c2339e0fd749", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-dthps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d32163c62b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:24.101762 containerd[1621]: 2024-12-13 05:16:24.052 [INFO][4621] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.69/32] ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:24.101762 containerd[1621]: 2024-12-13 05:16:24.052 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d32163c62b ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:24.101762 containerd[1621]: 2024-12-13 05:16:24.064 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" 
WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:24.101762 containerd[1621]: 2024-12-13 05:16:24.064 [INFO][4621] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cef81659-bce4-4b5b-b8bf-c2339e0fd749", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f", Pod:"coredns-76f75df574-dthps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d32163c62b", MAC:"4e:a7:5e:d8:e7:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:24.101762 containerd[1621]: 2024-12-13 05:16:24.090 [INFO][4621] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f" Namespace="kube-system" Pod="coredns-76f75df574-dthps" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:24.120026 containerd[1621]: time="2024-12-13T05:16:24.119712604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:16:24.120026 containerd[1621]: time="2024-12-13T05:16:24.119772193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:16:24.120026 containerd[1621]: time="2024-12-13T05:16:24.119800354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:24.120026 containerd[1621]: time="2024-12-13T05:16:24.119930237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:24.200626 containerd[1621]: time="2024-12-13T05:16:24.200393012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:16:24.201833 containerd[1621]: time="2024-12-13T05:16:24.201654272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:16:24.202301 containerd[1621]: time="2024-12-13T05:16:24.202173491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:24.202569 containerd[1621]: time="2024-12-13T05:16:24.202450960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:24.233733 containerd[1621]: time="2024-12-13T05:16:24.232448001Z" level=info msg="StopPodSandbox for \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\"" Dec 13 05:16:24.386350 containerd[1621]: time="2024-12-13T05:16:24.386292456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dthps,Uid:cef81659-bce4-4b5b-b8bf-c2339e0fd749,Namespace:kube-system,Attempt:1,} returns sandbox id \"9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f\"" Dec 13 05:16:24.408314 containerd[1621]: time="2024-12-13T05:16:24.408211851Z" level=info msg="CreateContainer within sandbox \"9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 05:16:24.426596 containerd[1621]: time="2024-12-13T05:16:24.425926027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-548b6bb445-n77k4,Uid:b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db\"" Dec 13 05:16:24.464324 containerd[1621]: time="2024-12-13T05:16:24.464272294Z" level=info msg="CreateContainer within sandbox \"9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5473984b0ec26b498b8097d058818501176521ed4c29dfe3d7e382f88fa3e970\"" Dec 13 05:16:24.467154 containerd[1621]: time="2024-12-13T05:16:24.465939730Z" level=info msg="StartContainer for \"5473984b0ec26b498b8097d058818501176521ed4c29dfe3d7e382f88fa3e970\"" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.357 [INFO][4771] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.358 [INFO][4771] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" iface="eth0" netns="/var/run/netns/cni-72cda3ed-d7eb-57e8-c81c-b0923c9520d0" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.370 [INFO][4771] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" iface="eth0" netns="/var/run/netns/cni-72cda3ed-d7eb-57e8-c81c-b0923c9520d0" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.372 [INFO][4771] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" iface="eth0" netns="/var/run/netns/cni-72cda3ed-d7eb-57e8-c81c-b0923c9520d0" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.372 [INFO][4771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.374 [INFO][4771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.477 [INFO][4793] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.480 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.481 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.498 [WARNING][4793] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.498 [INFO][4793] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.506 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:24.517653 containerd[1621]: 2024-12-13 05:16:24.510 [INFO][4771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:24.519526 containerd[1621]: time="2024-12-13T05:16:24.518089551Z" level=info msg="TearDown network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\" successfully" Dec 13 05:16:24.519526 containerd[1621]: time="2024-12-13T05:16:24.518152397Z" level=info msg="StopPodSandbox for \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\" returns successfully" Dec 13 05:16:24.521218 containerd[1621]: time="2024-12-13T05:16:24.520692629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m7856,Uid:20ff6575-c92f-4eb2-882b-7354b8ce1048,Namespace:kube-system,Attempt:1,}" Dec 13 05:16:24.676620 containerd[1621]: time="2024-12-13T05:16:24.675871872Z" level=info msg="StartContainer for \"5473984b0ec26b498b8097d058818501176521ed4c29dfe3d7e382f88fa3e970\" returns successfully" Dec 13 05:16:24.704175 systemd[1]: run-netns-cni\x2d72cda3ed\x2dd7eb\x2d57e8\x2dc81c\x2db0923c9520d0.mount: Deactivated successfully. 
Dec 13 05:16:24.791243 systemd-networkd[1257]: calica7cca48d86: Gained IPv6LL Dec 13 05:16:24.864699 kubelet[2909]: I1213 05:16:24.864624 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dthps" podStartSLOduration=38.864551088 podStartE2EDuration="38.864551088s" podCreationTimestamp="2024-12-13 05:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:16:24.864321413 +0000 UTC m=+54.849334077" watchObservedRunningTime="2024-12-13 05:16:24.864551088 +0000 UTC m=+54.849563763" Dec 13 05:16:24.986301 systemd-networkd[1257]: cali8401cf5713e: Link UP Dec 13 05:16:24.986653 systemd-networkd[1257]: cali8401cf5713e: Gained carrier Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.710 [INFO][4824] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0 coredns-76f75df574- kube-system 20ff6575-c92f-4eb2-882b-7354b8ce1048 800 0 2024-12-13 05:15:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-p0439.gb1.brightbox.com coredns-76f75df574-m7856 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8401cf5713e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.710 [INFO][4824] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.861 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" HandleID="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.892 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" HandleID="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000518e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-p0439.gb1.brightbox.com", "pod":"coredns-76f75df574-m7856", "timestamp":"2024-12-13 05:16:24.861133538 +0000 UTC"}, Hostname:"srv-p0439.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.892 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.893 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.894 [INFO][4844] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-p0439.gb1.brightbox.com' Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.899 [INFO][4844] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.912 [INFO][4844] ipam/ipam.go 372: Looking up existing affinities for host host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.923 [INFO][4844] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.930 [INFO][4844] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.934 [INFO][4844] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.934 [INFO][4844] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.937 [INFO][4844] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61 Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.950 [INFO][4844] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.968 [INFO][4844] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.70/26] block=192.168.124.64/26 handle="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.969 [INFO][4844] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.70/26] handle="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" host="srv-p0439.gb1.brightbox.com" Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.969 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
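The IPAM sequence above shows the host holding an affinity for block 192.168.124.64/26 and claiming 192.168.124.70/26 from it for the new coredns pod. A minimal Go check, using only the standard library and the two values quoted from the log, that the claimed address does fall inside that affine /26.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values taken from the IPAM entries above: the host's affine block and
	// the address handed to coredns-76f75df574-m7856.
	block := netip.MustParsePrefix("192.168.124.64/26")
	claimed := netip.MustParseAddr("192.168.124.70")

	// A /26 covers 64 addresses, here 192.168.124.64 through 192.168.124.127.
	fmt.Printf("%s contains %s: %v\n", block, claimed, block.Contains(claimed))
}
```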
Dec 13 05:16:25.034277 containerd[1621]: 2024-12-13 05:16:24.969 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.70/26] IPv6=[] ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" HandleID="k8s-pod-network.fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:25.040964 containerd[1621]: 2024-12-13 05:16:24.977 [INFO][4824] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20ff6575-c92f-4eb2-882b-7354b8ce1048", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-m7856", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8401cf5713e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:25.040964 containerd[1621]: 2024-12-13 05:16:24.977 [INFO][4824] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.70/32] ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:25.040964 containerd[1621]: 2024-12-13 05:16:24.977 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8401cf5713e ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:25.040964 containerd[1621]: 2024-12-13 05:16:24.987 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" 
WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:25.040964 containerd[1621]: 2024-12-13 05:16:24.996 [INFO][4824] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20ff6575-c92f-4eb2-882b-7354b8ce1048", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61", Pod:"coredns-76f75df574-m7856", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8401cf5713e", MAC:"6a:59:32:d3:26:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:25.040964 containerd[1621]: 2024-12-13 05:16:25.018 [INFO][4824] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61" Namespace="kube-system" Pod="coredns-76f75df574-m7856" WorkloadEndpoint="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:25.127433 containerd[1621]: time="2024-12-13T05:16:25.126878122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 05:16:25.127433 containerd[1621]: time="2024-12-13T05:16:25.126953431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 05:16:25.127433 containerd[1621]: time="2024-12-13T05:16:25.126991705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:25.127433 containerd[1621]: time="2024-12-13T05:16:25.127332095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 05:16:25.181990 systemd[1]: run-containerd-runc-k8s.io-fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61-runc.tKuaD6.mount: Deactivated successfully. Dec 13 05:16:25.269672 containerd[1621]: time="2024-12-13T05:16:25.268504154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m7856,Uid:20ff6575-c92f-4eb2-882b-7354b8ce1048,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61\"" Dec 13 05:16:25.296544 containerd[1621]: time="2024-12-13T05:16:25.296489689Z" level=info msg="CreateContainer within sandbox \"fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 05:16:25.313704 containerd[1621]: time="2024-12-13T05:16:25.313330571Z" level=info msg="CreateContainer within sandbox \"fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c868823c388e8cada2839c28915b7e897882c728885b9ab61afd8fc1b87e50ef\"" Dec 13 05:16:25.319470 containerd[1621]: time="2024-12-13T05:16:25.319415252Z" level=info msg="StartContainer for \"c868823c388e8cada2839c28915b7e897882c728885b9ab61afd8fc1b87e50ef\"" Dec 13 05:16:25.363259 systemd-networkd[1257]: cali8e53388ed5f: Gained IPv6LL Dec 13 05:16:25.447426 containerd[1621]: time="2024-12-13T05:16:25.446657587Z" level=info msg="StartContainer for \"c868823c388e8cada2839c28915b7e897882c728885b9ab61afd8fc1b87e50ef\" returns successfully" Dec 13 05:16:25.747066 systemd-networkd[1257]: cali7cfb1e1261e: Gained IPv6LL Dec 13 05:16:25.874110 kubelet[2909]: I1213 05:16:25.874050 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m7856" podStartSLOduration=39.874003401 podStartE2EDuration="39.874003401s" podCreationTimestamp="2024-12-13 05:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 05:16:25.87262387 +0000 UTC m=+55.857636550" watchObservedRunningTime="2024-12-13 05:16:25.874003401 +0000 UTC m=+55.859016058" Dec 13 05:16:25.946171 systemd-journald[1173]: Under memory pressure, flushing caches. Dec 13 05:16:25.939317 systemd-resolved[1512]: Under memory pressure, flushing caches. Dec 13 05:16:25.939353 systemd-resolved[1512]: Flushed all caches. 
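In the kubelet pod_startup_latency_tracker entry for coredns-76f75df574-m7856 above, the reported 39.874003401s matches the gap between the quoted podCreationTimestamp (05:15:46) and watchObservedRunningTime (05:16:25.874003401); with both pull timestamps at the zero time, the SLO and end-to-end durations coincide in this capture. A small Go sketch reproducing that subtraction from the logged timestamps, assuming only that they follow the format printed in the entry.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps quoted from the pod_startup_latency_tracker entry for
	// coredns-76f75df574-m7856; the layout mirrors how they appear in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2024-12-13 05:15:46 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2024-12-13 05:16:25.874003401 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 39.874003401s, the same figure reported as podStartE2EDuration.
	fmt.Println(observed.Sub(created))
}
```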
Dec 13 05:16:25.940011 systemd-networkd[1257]: cali6d32163c62b: Gained IPv6LL Dec 13 05:16:26.066712 systemd-networkd[1257]: cali8401cf5713e: Gained IPv6LL Dec 13 05:16:27.027610 containerd[1621]: time="2024-12-13T05:16:27.026367963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:27.027610 containerd[1621]: time="2024-12-13T05:16:27.027204045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 05:16:27.028562 containerd[1621]: time="2024-12-13T05:16:27.028482558Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:27.032178 containerd[1621]: time="2024-12-13T05:16:27.031384828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:27.033553 containerd[1621]: time="2024-12-13T05:16:27.033358609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.199820236s" Dec 13 05:16:27.033553 containerd[1621]: time="2024-12-13T05:16:27.033411962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 05:16:27.035169 containerd[1621]: time="2024-12-13T05:16:27.034843812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 05:16:27.039747 containerd[1621]: time="2024-12-13T05:16:27.039690634Z" level=info msg="CreateContainer within sandbox \"e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 05:16:27.055267 containerd[1621]: time="2024-12-13T05:16:27.055096957Z" level=info msg="CreateContainer within sandbox \"e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e0b0c33380d77b937a86538d776ac04c92b8c62fe675a8e67b73e76f1898887a\"" Dec 13 05:16:27.057159 containerd[1621]: time="2024-12-13T05:16:27.056997000Z" level=info msg="StartContainer for \"e0b0c33380d77b937a86538d776ac04c92b8c62fe675a8e67b73e76f1898887a\"" Dec 13 05:16:27.185860 containerd[1621]: time="2024-12-13T05:16:27.185788859Z" level=info msg="StartContainer for \"e0b0c33380d77b937a86538d776ac04c92b8c62fe675a8e67b73e76f1898887a\" returns successfully" Dec 13 05:16:27.873909 kubelet[2909]: I1213 05:16:27.872076 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-795898bcbf-x8w4g" podStartSLOduration=30.670587441 podStartE2EDuration="34.872022737s" podCreationTimestamp="2024-12-13 05:15:53 +0000 UTC" firstStartedPulling="2024-12-13 05:16:22.83278924 +0000 UTC m=+52.817801893" lastFinishedPulling="2024-12-13 05:16:27.034224505 +0000 UTC m=+57.019237189" observedRunningTime="2024-12-13 05:16:27.870927433 +0000 UTC m=+57.855940103" watchObservedRunningTime="2024-12-13 
05:16:27.872022737 +0000 UTC m=+57.857035385" Dec 13 05:16:28.705459 containerd[1621]: time="2024-12-13T05:16:28.704265663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:28.705459 containerd[1621]: time="2024-12-13T05:16:28.705377161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 05:16:28.706572 containerd[1621]: time="2024-12-13T05:16:28.706329232Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:28.710074 containerd[1621]: time="2024-12-13T05:16:28.710041263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:28.711068 containerd[1621]: time="2024-12-13T05:16:28.711035000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.676152283s" Dec 13 05:16:28.711246 containerd[1621]: time="2024-12-13T05:16:28.711217827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 05:16:28.712233 containerd[1621]: time="2024-12-13T05:16:28.712202857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 05:16:28.716194 containerd[1621]: time="2024-12-13T05:16:28.716158297Z" level=info msg="CreateContainer within sandbox \"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 05:16:28.739571 containerd[1621]: time="2024-12-13T05:16:28.739517291Z" level=info msg="CreateContainer within sandbox \"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3f605c4a602ddf10409e8b06cc8239ba17ce06612fe7bcfb68a1792c9302b9c8\"" Dec 13 05:16:28.742562 containerd[1621]: time="2024-12-13T05:16:28.741791619Z" level=info msg="StartContainer for \"3f605c4a602ddf10409e8b06cc8239ba17ce06612fe7bcfb68a1792c9302b9c8\"" Dec 13 05:16:28.852648 containerd[1621]: time="2024-12-13T05:16:28.852585968Z" level=info msg="StartContainer for \"3f605c4a602ddf10409e8b06cc8239ba17ce06612fe7bcfb68a1792c9302b9c8\" returns successfully" Dec 13 05:16:28.865422 kubelet[2909]: I1213 05:16:28.865383 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:16:29.103101 containerd[1621]: time="2024-12-13T05:16:29.103026731Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:29.103914 containerd[1621]: time="2024-12-13T05:16:29.103846477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 05:16:29.107572 containerd[1621]: time="2024-12-13T05:16:29.107524146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id 
\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 394.319275ms" Dec 13 05:16:29.107664 containerd[1621]: time="2024-12-13T05:16:29.107574223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 05:16:29.108747 containerd[1621]: time="2024-12-13T05:16:29.108460046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 05:16:29.113301 containerd[1621]: time="2024-12-13T05:16:29.113257920Z" level=info msg="CreateContainer within sandbox \"4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 05:16:29.133053 containerd[1621]: time="2024-12-13T05:16:29.133000970Z" level=info msg="CreateContainer within sandbox \"4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef1c9aea02fb68c24e1885c5f9ad1e7a6f792f071f4373f7abee6f85562d9c9c\"" Dec 13 05:16:29.135262 containerd[1621]: time="2024-12-13T05:16:29.134935118Z" level=info msg="StartContainer for \"ef1c9aea02fb68c24e1885c5f9ad1e7a6f792f071f4373f7abee6f85562d9c9c\"" Dec 13 05:16:29.249625 containerd[1621]: time="2024-12-13T05:16:29.249579667Z" level=info msg="StartContainer for \"ef1c9aea02fb68c24e1885c5f9ad1e7a6f792f071f4373f7abee6f85562d9c9c\" returns successfully" Dec 13 05:16:29.890554 kubelet[2909]: I1213 05:16:29.890493 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-795898bcbf-ck94r" podStartSLOduration=31.644068899 podStartE2EDuration="36.89039913s" podCreationTimestamp="2024-12-13 05:15:53 +0000 UTC" firstStartedPulling="2024-12-13 05:16:23.861782666 +0000 UTC m=+53.846795313" lastFinishedPulling="2024-12-13 05:16:29.108112871 +0000 UTC m=+59.093125544" observedRunningTime="2024-12-13 05:16:29.890343508 +0000 UTC m=+59.875356187" watchObservedRunningTime="2024-12-13 05:16:29.89039913 +0000 UTC m=+59.875411790" Dec 13 05:16:30.280689 containerd[1621]: time="2024-12-13T05:16:30.280363252Z" level=info msg="StopPodSandbox for \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\"" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.386 [WARNING][5096] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bc8feac-28cf-423e-8b2e-f3d9212b2634", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a", Pod:"calico-apiserver-795898bcbf-ck94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e53388ed5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.388 [INFO][5096] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.389 [INFO][5096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" iface="eth0" netns="" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.389 [INFO][5096] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.389 [INFO][5096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.471 [INFO][5102] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.471 [INFO][5102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.471 [INFO][5102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.480 [WARNING][5102] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.480 [INFO][5102] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.483 [INFO][5102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:30.487184 containerd[1621]: 2024-12-13 05:16:30.484 [INFO][5096] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.487184 containerd[1621]: time="2024-12-13T05:16:30.486991878Z" level=info msg="TearDown network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\" successfully" Dec 13 05:16:30.487184 containerd[1621]: time="2024-12-13T05:16:30.487027935Z" level=info msg="StopPodSandbox for \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\" returns successfully" Dec 13 05:16:30.495866 containerd[1621]: time="2024-12-13T05:16:30.495818588Z" level=info msg="RemovePodSandbox for \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\"" Dec 13 05:16:30.495972 containerd[1621]: time="2024-12-13T05:16:30.495872827Z" level=info msg="Forcibly stopping sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\"" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.549 [WARNING][5120] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bc8feac-28cf-423e-8b2e-f3d9212b2634", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"4d88a2cb4f7ffe546c8326e39d2982b9a5d2e5075d2b952fe6a0da76e4f3cf1a", Pod:"calico-apiserver-795898bcbf-ck94r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e53388ed5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.549 [INFO][5120] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.549 [INFO][5120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" iface="eth0" netns="" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.549 [INFO][5120] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.549 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.599 [INFO][5126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.599 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.599 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.612 [WARNING][5126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.612 [INFO][5126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" HandleID="k8s-pod-network.e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--ck94r-eth0" Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.614 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:30.618923 containerd[1621]: 2024-12-13 05:16:30.616 [INFO][5120] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1" Dec 13 05:16:30.620286 containerd[1621]: time="2024-12-13T05:16:30.619007794Z" level=info msg="TearDown network for sandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\" successfully" Dec 13 05:16:30.634610 containerd[1621]: time="2024-12-13T05:16:30.634408064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:16:30.634610 containerd[1621]: time="2024-12-13T05:16:30.634548388Z" level=info msg="RemovePodSandbox \"e8d76595af3388d10b3ace0a7998c685a7f910c4ac5da92f4c9d17b6abc775d1\" returns successfully" Dec 13 05:16:30.636070 containerd[1621]: time="2024-12-13T05:16:30.636010655Z" level=info msg="StopPodSandbox for \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\"" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.694 [WARNING][5144] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d984527-00ee-4ef0-a476-1512e9636ccc", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a", Pod:"calico-apiserver-795898bcbf-x8w4g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ea0ff524d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.695 [INFO][5144] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.695 [INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" iface="eth0" netns="" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.695 [INFO][5144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.695 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.733 [INFO][5151] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.733 [INFO][5151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.733 [INFO][5151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.742 [WARNING][5151] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.742 [INFO][5151] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.743 [INFO][5151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:30.747873 containerd[1621]: 2024-12-13 05:16:30.745 [INFO][5144] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.749504 containerd[1621]: time="2024-12-13T05:16:30.747915511Z" level=info msg="TearDown network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\" successfully" Dec 13 05:16:30.749504 containerd[1621]: time="2024-12-13T05:16:30.747947614Z" level=info msg="StopPodSandbox for \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\" returns successfully" Dec 13 05:16:30.749504 containerd[1621]: time="2024-12-13T05:16:30.748969909Z" level=info msg="RemovePodSandbox for \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\"" Dec 13 05:16:30.749504 containerd[1621]: time="2024-12-13T05:16:30.749007403Z" level=info msg="Forcibly stopping sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\"" Dec 13 05:16:30.880216 kubelet[2909]: I1213 05:16:30.878579 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.862 [WARNING][5169] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0", GenerateName:"calico-apiserver-795898bcbf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d984527-00ee-4ef0-a476-1512e9636ccc", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795898bcbf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"e07f74a0181c41fedb7cb31a74e5f321d55bac7605eca5a8430495f94f2e067a", Pod:"calico-apiserver-795898bcbf-x8w4g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ea0ff524d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.865 [INFO][5169] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.865 [INFO][5169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" iface="eth0" netns="" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.865 [INFO][5169] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.865 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.921 [INFO][5176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.924 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.924 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.933 [WARNING][5176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.933 [INFO][5176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" HandleID="k8s-pod-network.45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--apiserver--795898bcbf--x8w4g-eth0" Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.935 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:30.939674 containerd[1621]: 2024-12-13 05:16:30.937 [INFO][5169] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40" Dec 13 05:16:30.940666 containerd[1621]: time="2024-12-13T05:16:30.939736593Z" level=info msg="TearDown network for sandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\" successfully" Dec 13 05:16:30.943684 containerd[1621]: time="2024-12-13T05:16:30.943644912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:16:30.943764 containerd[1621]: time="2024-12-13T05:16:30.943726830Z" level=info msg="RemovePodSandbox \"45cea2665418527e382388ca1a8d533bbaa1955aca5c54206b3c3dd1a55f2d40\" returns successfully" Dec 13 05:16:30.944409 containerd[1621]: time="2024-12-13T05:16:30.944299691Z" level=info msg="StopPodSandbox for \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\"" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.024 [WARNING][5194] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef", Pod:"csi-node-driver-nfbkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica7cca48d86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.025 [INFO][5194] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.025 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" iface="eth0" netns="" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.025 [INFO][5194] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.025 [INFO][5194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.061 [INFO][5201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.062 [INFO][5201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.062 [INFO][5201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.072 [WARNING][5201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.072 [INFO][5201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.074 [INFO][5201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:31.079323 containerd[1621]: 2024-12-13 05:16:31.076 [INFO][5194] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.079323 containerd[1621]: time="2024-12-13T05:16:31.078682268Z" level=info msg="TearDown network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\" successfully" Dec 13 05:16:31.079323 containerd[1621]: time="2024-12-13T05:16:31.078750617Z" level=info msg="StopPodSandbox for \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\" returns successfully" Dec 13 05:16:31.081970 containerd[1621]: time="2024-12-13T05:16:31.079484710Z" level=info msg="RemovePodSandbox for \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\"" Dec 13 05:16:31.081970 containerd[1621]: time="2024-12-13T05:16:31.079581340Z" level=info msg="Forcibly stopping sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\"" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.183 [WARNING][5219] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce9c7b6e-1a22-4ef0-a19c-f54deef2ef8c", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef", Pod:"csi-node-driver-nfbkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica7cca48d86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.184 [INFO][5219] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.184 [INFO][5219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" iface="eth0" netns="" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.184 [INFO][5219] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.184 [INFO][5219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.243 [INFO][5225] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.244 [INFO][5225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.244 [INFO][5225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.252 [WARNING][5225] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.252 [INFO][5225] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" HandleID="k8s-pod-network.82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Workload="srv--p0439.gb1.brightbox.com-k8s-csi--node--driver--nfbkn-eth0" Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.254 [INFO][5225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:31.259990 containerd[1621]: 2024-12-13 05:16:31.256 [INFO][5219] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7" Dec 13 05:16:31.259990 containerd[1621]: time="2024-12-13T05:16:31.258035632Z" level=info msg="TearDown network for sandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\" successfully" Dec 13 05:16:31.296028 containerd[1621]: time="2024-12-13T05:16:31.295982733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:16:31.298383 containerd[1621]: time="2024-12-13T05:16:31.296146856Z" level=info msg="RemovePodSandbox \"82506a6a82f3107212f048a8d02c4b0e3924df441411c85a1849caeb3fbe95d7\" returns successfully" Dec 13 05:16:31.298383 containerd[1621]: time="2024-12-13T05:16:31.297630153Z" level=info msg="StopPodSandbox for \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\"" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.359 [WARNING][5243] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cef81659-bce4-4b5b-b8bf-c2339e0fd749", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f", Pod:"coredns-76f75df574-dthps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d32163c62b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.360 [INFO][5243] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.360 [INFO][5243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" iface="eth0" netns="" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.360 [INFO][5243] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.360 [INFO][5243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.406 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.407 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.407 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.416 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.416 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.420 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:31.425143 containerd[1621]: 2024-12-13 05:16:31.422 [INFO][5243] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.425143 containerd[1621]: time="2024-12-13T05:16:31.423956717Z" level=info msg="TearDown network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\" successfully" Dec 13 05:16:31.425143 containerd[1621]: time="2024-12-13T05:16:31.424002494Z" level=info msg="StopPodSandbox for \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\" returns successfully" Dec 13 05:16:31.425143 containerd[1621]: time="2024-12-13T05:16:31.424620146Z" level=info msg="RemovePodSandbox for \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\"" Dec 13 05:16:31.425143 containerd[1621]: time="2024-12-13T05:16:31.424655973Z" level=info msg="Forcibly stopping sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\"" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.513 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"cef81659-bce4-4b5b-b8bf-c2339e0fd749", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"9e3a71ded1f93c321a150eaa3ffa2dcc320fec5f9a3e9937c214cd755ae4845f", Pod:"coredns-76f75df574-dthps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d32163c62b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.513 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.513 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" iface="eth0" netns="" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.513 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.514 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.565 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.566 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.566 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.580 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.580 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" HandleID="k8s-pod-network.f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--dthps-eth0" Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.586 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:31.591896 containerd[1621]: 2024-12-13 05:16:31.588 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570" Dec 13 05:16:31.591896 containerd[1621]: time="2024-12-13T05:16:31.591276652Z" level=info msg="TearDown network for sandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\" successfully" Dec 13 05:16:31.597096 containerd[1621]: time="2024-12-13T05:16:31.596632449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:16:31.597096 containerd[1621]: time="2024-12-13T05:16:31.596722513Z" level=info msg="RemovePodSandbox \"f41b90515a3cb38878bfc1d7ad7da1b8ba24d92ea17f859d5ff03d89aa3e7570\" returns successfully" Dec 13 05:16:31.597602 containerd[1621]: time="2024-12-13T05:16:31.597566074Z" level=info msg="StopPodSandbox for \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\"" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.709 [WARNING][5292] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0", GenerateName:"calico-kube-controllers-548b6bb445-", Namespace:"calico-system", SelfLink:"", UID:"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548b6bb445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db", Pod:"calico-kube-controllers-548b6bb445-n77k4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cfb1e1261e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.709 [INFO][5292] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.709 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" iface="eth0" netns="" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.709 [INFO][5292] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.710 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.773 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.774 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.774 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.785 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.785 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.787 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:31.796572 containerd[1621]: 2024-12-13 05:16:31.790 [INFO][5292] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.798736 containerd[1621]: time="2024-12-13T05:16:31.797900483Z" level=info msg="TearDown network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\" successfully" Dec 13 05:16:31.798736 containerd[1621]: time="2024-12-13T05:16:31.798042982Z" level=info msg="StopPodSandbox for \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\" returns successfully" Dec 13 05:16:31.799387 containerd[1621]: time="2024-12-13T05:16:31.799354539Z" level=info msg="RemovePodSandbox for \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\"" Dec 13 05:16:31.799470 containerd[1621]: time="2024-12-13T05:16:31.799410096Z" level=info msg="Forcibly stopping sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\"" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.888 [WARNING][5320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0", GenerateName:"calico-kube-controllers-548b6bb445-", Namespace:"calico-system", SelfLink:"", UID:"b5832f8d-80e4-4cb3-aa3a-b02c7e2ed4bb", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"548b6bb445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db", Pod:"calico-kube-controllers-548b6bb445-n77k4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cfb1e1261e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.889 [INFO][5320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.889 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" iface="eth0" netns="" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.889 [INFO][5320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.889 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.952 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.952 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.952 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.972 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.972 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" HandleID="k8s-pod-network.9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Workload="srv--p0439.gb1.brightbox.com-k8s-calico--kube--controllers--548b6bb445--n77k4-eth0" Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.974 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:31.980980 containerd[1621]: 2024-12-13 05:16:31.976 [INFO][5320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131" Dec 13 05:16:31.980980 containerd[1621]: time="2024-12-13T05:16:31.978953070Z" level=info msg="TearDown network for sandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\" successfully" Dec 13 05:16:31.990368 containerd[1621]: time="2024-12-13T05:16:31.990329356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:16:31.990470 containerd[1621]: time="2024-12-13T05:16:31.990403834Z" level=info msg="RemovePodSandbox \"9ab78e6efce3e39c288feeb536ab2cfc8880ec0aa35bcfb75ec33e8316356131\" returns successfully" Dec 13 05:16:31.992307 containerd[1621]: time="2024-12-13T05:16:31.992272427Z" level=info msg="StopPodSandbox for \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\"" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.086 [WARNING][5344] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20ff6575-c92f-4eb2-882b-7354b8ce1048", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61", Pod:"coredns-76f75df574-m7856", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8401cf5713e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.086 [INFO][5344] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.086 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" iface="eth0" netns="" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.087 [INFO][5344] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.087 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.175 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.176 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.179 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.192 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.192 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.194 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:32.200960 containerd[1621]: 2024-12-13 05:16:32.197 [INFO][5344] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.202093 containerd[1621]: time="2024-12-13T05:16:32.200990417Z" level=info msg="TearDown network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\" successfully" Dec 13 05:16:32.202093 containerd[1621]: time="2024-12-13T05:16:32.201027127Z" level=info msg="StopPodSandbox for \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\" returns successfully" Dec 13 05:16:32.203421 containerd[1621]: time="2024-12-13T05:16:32.203376397Z" level=info msg="RemovePodSandbox for \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\"" Dec 13 05:16:32.203566 containerd[1621]: time="2024-12-13T05:16:32.203424631Z" level=info msg="Forcibly stopping sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\"" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.327 [WARNING][5371] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"20ff6575-c92f-4eb2-882b-7354b8ce1048", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 5, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-p0439.gb1.brightbox.com", ContainerID:"fe0c4f6dd81a705caa0c0f59ef402f3098f58d9f40b8ea832dc6d1ce5a374d61", Pod:"coredns-76f75df574-m7856", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8401cf5713e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.328 [INFO][5371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.328 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" iface="eth0" netns="" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.328 [INFO][5371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.328 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.440 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.441 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.441 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.459 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.459 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" HandleID="k8s-pod-network.3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Workload="srv--p0439.gb1.brightbox.com-k8s-coredns--76f75df574--m7856-eth0" Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.470 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 05:16:32.479042 containerd[1621]: 2024-12-13 05:16:32.476 [INFO][5371] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f" Dec 13 05:16:32.479042 containerd[1621]: time="2024-12-13T05:16:32.479034363Z" level=info msg="TearDown network for sandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\" successfully" Dec 13 05:16:32.487723 containerd[1621]: time="2024-12-13T05:16:32.487683016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 05:16:32.488206 containerd[1621]: time="2024-12-13T05:16:32.487897755Z" level=info msg="RemovePodSandbox \"3754e6b058036373abb3bf637e5394d8738766363111017c0b79b73f83b5a86f\" returns successfully" Dec 13 05:16:32.982898 containerd[1621]: time="2024-12-13T05:16:32.982815387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:32.984328 containerd[1621]: time="2024-12-13T05:16:32.984153182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 05:16:32.985385 containerd[1621]: time="2024-12-13T05:16:32.985348745Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:32.995903 containerd[1621]: time="2024-12-13T05:16:32.995829544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:32.997366 containerd[1621]: time="2024-12-13T05:16:32.997108747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.888586218s" Dec 13 05:16:32.997366 containerd[1621]: time="2024-12-13T05:16:32.997207585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 05:16:32.999805 containerd[1621]: time="2024-12-13T05:16:32.998432190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 05:16:33.044007 containerd[1621]: time="2024-12-13T05:16:33.043527231Z" level=info msg="CreateContainer within sandbox \"226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 05:16:33.061039 containerd[1621]: time="2024-12-13T05:16:33.060451632Z" level=info msg="CreateContainer within sandbox \"226ecbd3c5c284ee88a38b940c437f6378a9a0158eb09fca82b16e6a8d0330db\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"099bbf46f292fdda9bc37a9023b10168d274d9180bf6e22782564134ce8d31fc\"" Dec 13 05:16:33.065249 containerd[1621]: time="2024-12-13T05:16:33.063281399Z" level=info msg="StartContainer for \"099bbf46f292fdda9bc37a9023b10168d274d9180bf6e22782564134ce8d31fc\"" Dec 13 05:16:33.198299 containerd[1621]: time="2024-12-13T05:16:33.198241736Z" level=info msg="StartContainer for \"099bbf46f292fdda9bc37a9023b10168d274d9180bf6e22782564134ce8d31fc\" returns successfully" Dec 13 05:16:33.946054 kubelet[2909]: I1213 05:16:33.945935 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-548b6bb445-n77k4" podStartSLOduration=31.380480694 podStartE2EDuration="39.945860941s" podCreationTimestamp="2024-12-13 05:15:54 +0000 UTC" firstStartedPulling="2024-12-13 05:16:24.432481475 +0000 UTC m=+54.417494126" lastFinishedPulling="2024-12-13 05:16:32.997861709 +0000 UTC m=+62.982874373" observedRunningTime="2024-12-13 05:16:33.943615397 +0000 UTC m=+63.928628067" watchObservedRunningTime="2024-12-13 05:16:33.945860941 +0000 UTC m=+63.930873600" Dec 13 05:16:35.149178 containerd[1621]: time="2024-12-13T05:16:35.149070086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:35.151616 containerd[1621]: time="2024-12-13T05:16:35.151547851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 05:16:35.152798 containerd[1621]: time="2024-12-13T05:16:35.152457345Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:35.159307 containerd[1621]: time="2024-12-13T05:16:35.159272416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 05:16:35.161451 containerd[1621]: time="2024-12-13T05:16:35.161363749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.16288449s" Dec 13 05:16:35.162676 containerd[1621]: time="2024-12-13T05:16:35.161642810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 05:16:35.167703 containerd[1621]: time="2024-12-13T05:16:35.167106979Z" level=info msg="CreateContainer within sandbox \"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 05:16:35.193507 containerd[1621]: time="2024-12-13T05:16:35.193397054Z" level=info msg="CreateContainer within sandbox \"89429350f6c33055b0f8e8f67bddc1ec5bfb511ebb7741dfe425544ac4baf8ef\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2417f68939378ce850a96b53a5bb9cc7884451ed6e4fc23e3ebe532d99663d90\"" Dec 13 05:16:35.200881 containerd[1621]: time="2024-12-13T05:16:35.197787274Z" level=info msg="StartContainer for \"2417f68939378ce850a96b53a5bb9cc7884451ed6e4fc23e3ebe532d99663d90\"" Dec 13 05:16:35.209465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795059441.mount: Deactivated successfully. Dec 13 05:16:35.355356 containerd[1621]: time="2024-12-13T05:16:35.355222157Z" level=info msg="StartContainer for \"2417f68939378ce850a96b53a5bb9cc7884451ed6e4fc23e3ebe532d99663d90\" returns successfully" Dec 13 05:16:35.982161 kubelet[2909]: I1213 05:16:35.981829 2909 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-nfbkn" podStartSLOduration=30.456364713 podStartE2EDuration="41.981769337s" podCreationTimestamp="2024-12-13 05:15:54 +0000 UTC" firstStartedPulling="2024-12-13 05:16:23.636534022 +0000 UTC m=+53.621546673" lastFinishedPulling="2024-12-13 05:16:35.161938651 +0000 UTC m=+65.146951297" observedRunningTime="2024-12-13 05:16:35.979996867 +0000 UTC m=+65.965009529" watchObservedRunningTime="2024-12-13 05:16:35.981769337 +0000 UTC m=+65.966781996" Dec 13 05:16:36.607924 kubelet[2909]: I1213 05:16:36.607821 2909 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 05:16:36.620865 kubelet[2909]: I1213 05:16:36.620814 2909 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 05:16:38.179669 systemd[1]: run-containerd-runc-k8s.io-099bbf46f292fdda9bc37a9023b10168d274d9180bf6e22782564134ce8d31fc-runc.zU08AC.mount: Deactivated successfully. Dec 13 05:16:39.453499 systemd[1]: Started sshd@7-10.230.15.106:22-147.75.109.163:50304.service - OpenSSH per-connection server daemon (147.75.109.163:50304). Dec 13 05:16:39.954950 systemd-resolved[1512]: Under memory pressure, flushing caches. Dec 13 05:16:39.958393 systemd-journald[1173]: Under memory pressure, flushing caches. Dec 13 05:16:39.955103 systemd-resolved[1512]: Flushed all caches. 
Dec 13 05:16:40.382056 update_engine[1603]: I20241213 05:16:40.381837 1603 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 05:16:40.382056 update_engine[1603]: I20241213 05:16:40.382023 1603 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 05:16:40.384364 update_engine[1603]: I20241213 05:16:40.384306 1603 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 05:16:40.385451 update_engine[1603]: I20241213 05:16:40.385407 1603 omaha_request_params.cc:62] Current group set to stable Dec 13 05:16:40.385858 update_engine[1603]: I20241213 05:16:40.385720 1603 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 05:16:40.385858 update_engine[1603]: I20241213 05:16:40.385749 1603 update_attempter.cc:643] Scheduling an action processor start. Dec 13 05:16:40.385858 update_engine[1603]: I20241213 05:16:40.385787 1603 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 05:16:40.386063 update_engine[1603]: I20241213 05:16:40.385871 1603 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 05:16:40.386063 update_engine[1603]: I20241213 05:16:40.385982 1603 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 05:16:40.386063 update_engine[1603]: I20241213 05:16:40.386003 1603 omaha_request_action.cc:272] Request: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: Dec 13 05:16:40.386063 update_engine[1603]: I20241213 05:16:40.386016 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 05:16:40.403394 update_engine[1603]: I20241213 05:16:40.396801 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 05:16:40.403394 update_engine[1603]: I20241213 05:16:40.397269 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 05:16:40.406906 update_engine[1603]: E20241213 05:16:40.406700 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 05:16:40.406906 update_engine[1603]: I20241213 05:16:40.406831 1603 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 05:16:40.407365 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 05:16:40.425476 sshd[5539]: Accepted publickey for core from 147.75.109.163 port 50304 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:16:40.429717 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:16:40.440284 systemd-logind[1602]: New session 10 of user core. Dec 13 05:16:40.447551 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 05:16:41.727662 sshd[5539]: pam_unix(sshd:session): session closed for user core Dec 13 05:16:41.736734 systemd[1]: sshd@7-10.230.15.106:22-147.75.109.163:50304.service: Deactivated successfully. Dec 13 05:16:41.749825 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 05:16:41.752272 systemd-logind[1602]: Session 10 logged out. Waiting for processes to exit. Dec 13 05:16:41.757454 systemd-logind[1602]: Removed session 10. 
Dec 13 05:16:43.067953 kubelet[2909]: I1213 05:16:43.067311 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:16:46.889514 systemd[1]: Started sshd@8-10.230.15.106:22-147.75.109.163:34148.service - OpenSSH per-connection server daemon (147.75.109.163:34148). Dec 13 05:16:47.822668 sshd[5568]: Accepted publickey for core from 147.75.109.163 port 34148 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:16:47.824903 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:16:47.840267 systemd-logind[1602]: New session 11 of user core. Dec 13 05:16:47.845142 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 05:16:48.636201 sshd[5568]: pam_unix(sshd:session): session closed for user core Dec 13 05:16:48.643184 systemd[1]: sshd@8-10.230.15.106:22-147.75.109.163:34148.service: Deactivated successfully. Dec 13 05:16:48.648147 systemd-logind[1602]: Session 11 logged out. Waiting for processes to exit. Dec 13 05:16:48.648915 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 05:16:48.651771 systemd-logind[1602]: Removed session 11. Dec 13 05:16:50.259768 update_engine[1603]: I20241213 05:16:50.259264 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 05:16:50.261930 update_engine[1603]: I20241213 05:16:50.261382 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 05:16:50.261930 update_engine[1603]: I20241213 05:16:50.261862 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 05:16:50.270796 update_engine[1603]: E20241213 05:16:50.270654 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 05:16:50.270796 update_engine[1603]: I20241213 05:16:50.270752 1603 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 05:16:53.788885 systemd[1]: Started sshd@9-10.230.15.106:22-147.75.109.163:34160.service - OpenSSH per-connection server daemon (147.75.109.163:34160). Dec 13 05:16:54.704267 sshd[5585]: Accepted publickey for core from 147.75.109.163 port 34160 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:16:54.706130 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:16:54.712838 systemd-logind[1602]: New session 12 of user core. Dec 13 05:16:54.721552 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 05:16:55.449957 sshd[5585]: pam_unix(sshd:session): session closed for user core Dec 13 05:16:55.457059 systemd[1]: sshd@9-10.230.15.106:22-147.75.109.163:34160.service: Deactivated successfully. Dec 13 05:16:55.462227 systemd-logind[1602]: Session 12 logged out. Waiting for processes to exit. Dec 13 05:16:55.462315 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 05:16:55.465717 systemd-logind[1602]: Removed session 12. Dec 13 05:16:55.600450 systemd[1]: Started sshd@10-10.230.15.106:22-147.75.109.163:34168.service - OpenSSH per-connection server daemon (147.75.109.163:34168). Dec 13 05:16:56.520284 sshd[5599]: Accepted publickey for core from 147.75.109.163 port 34168 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:16:56.522425 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:16:56.529391 systemd-logind[1602]: New session 13 of user core. Dec 13 05:16:56.544795 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 13 05:16:57.312270 sshd[5599]: pam_unix(sshd:session): session closed for user core Dec 13 05:16:57.318328 systemd[1]: sshd@10-10.230.15.106:22-147.75.109.163:34168.service: Deactivated successfully. Dec 13 05:16:57.322691 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 05:16:57.322694 systemd-logind[1602]: Session 13 logged out. Waiting for processes to exit. Dec 13 05:16:57.325657 systemd-logind[1602]: Removed session 13. Dec 13 05:16:57.463505 systemd[1]: Started sshd@11-10.230.15.106:22-147.75.109.163:39876.service - OpenSSH per-connection server daemon (147.75.109.163:39876). Dec 13 05:16:57.913358 kubelet[2909]: I1213 05:16:57.913025 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 05:16:58.351740 sshd[5611]: Accepted publickey for core from 147.75.109.163 port 39876 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:16:58.355803 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:16:58.369255 systemd-logind[1602]: New session 14 of user core. Dec 13 05:16:58.375568 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 05:16:59.173806 sshd[5611]: pam_unix(sshd:session): session closed for user core Dec 13 05:16:59.183702 systemd-logind[1602]: Session 14 logged out. Waiting for processes to exit. Dec 13 05:16:59.185204 systemd[1]: sshd@11-10.230.15.106:22-147.75.109.163:39876.service: Deactivated successfully. Dec 13 05:16:59.191687 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 05:16:59.195478 systemd-logind[1602]: Removed session 14. Dec 13 05:17:00.259356 update_engine[1603]: I20241213 05:17:00.259191 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 05:17:00.260209 update_engine[1603]: I20241213 05:17:00.259867 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 05:17:00.260450 update_engine[1603]: I20241213 05:17:00.260401 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 05:17:00.260868 update_engine[1603]: E20241213 05:17:00.260821 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 05:17:00.260943 update_engine[1603]: I20241213 05:17:00.260907 1603 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 05:17:04.327489 systemd[1]: Started sshd@12-10.230.15.106:22-147.75.109.163:39886.service - OpenSSH per-connection server daemon (147.75.109.163:39886). Dec 13 05:17:05.245267 sshd[5661]: Accepted publickey for core from 147.75.109.163 port 39886 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:17:05.247477 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:17:05.255838 systemd-logind[1602]: New session 15 of user core. Dec 13 05:17:05.266694 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 05:17:06.089308 sshd[5661]: pam_unix(sshd:session): session closed for user core Dec 13 05:17:06.100521 systemd[1]: sshd@12-10.230.15.106:22-147.75.109.163:39886.service: Deactivated successfully. Dec 13 05:17:06.107264 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 05:17:06.108211 systemd-logind[1602]: Session 15 logged out. Waiting for processes to exit. Dec 13 05:17:06.111842 systemd-logind[1602]: Removed session 15. Dec 13 05:17:08.174000 systemd[1]: run-containerd-runc-k8s.io-099bbf46f292fdda9bc37a9023b10168d274d9180bf6e22782564134ce8d31fc-runc.drdFSr.mount: Deactivated successfully. 
Dec 13 05:17:10.261349 update_engine[1603]: I20241213 05:17:10.261075 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 05:17:10.262349 update_engine[1603]: I20241213 05:17:10.262025 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 05:17:10.262903 update_engine[1603]: I20241213 05:17:10.262737 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 05:17:10.263177 update_engine[1603]: E20241213 05:17:10.263141 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 05:17:10.263280 update_engine[1603]: I20241213 05:17:10.263224 1603 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 05:17:10.263280 update_engine[1603]: I20241213 05:17:10.263257 1603 omaha_request_action.cc:617] Omaha request response: Dec 13 05:17:10.263482 update_engine[1603]: E20241213 05:17:10.263453 1603 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 05:17:10.268851 update_engine[1603]: I20241213 05:17:10.268191 1603 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 05:17:10.268851 update_engine[1603]: I20241213 05:17:10.268230 1603 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 05:17:10.268851 update_engine[1603]: I20241213 05:17:10.268246 1603 update_attempter.cc:306] Processing Done. Dec 13 05:17:10.268851 update_engine[1603]: E20241213 05:17:10.268297 1603 update_attempter.cc:619] Update failed. Dec 13 05:17:10.272546 update_engine[1603]: I20241213 05:17:10.272491 1603 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 05:17:10.272546 update_engine[1603]: I20241213 05:17:10.272527 1603 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 05:17:10.272546 update_engine[1603]: I20241213 05:17:10.272543 1603 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 05:17:10.272755 update_engine[1603]: I20241213 05:17:10.272730 1603 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 05:17:10.272842 update_engine[1603]: I20241213 05:17:10.272816 1603 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 05:17:10.272842 update_engine[1603]: I20241213 05:17:10.272835 1603 omaha_request_action.cc:272] Request: Dec 13 05:17:10.272842 update_engine[1603]: Dec 13 05:17:10.272842 update_engine[1603]: Dec 13 05:17:10.272842 update_engine[1603]: Dec 13 05:17:10.272842 update_engine[1603]: Dec 13 05:17:10.272842 update_engine[1603]: Dec 13 05:17:10.272842 update_engine[1603]: Dec 13 05:17:10.273280 update_engine[1603]: I20241213 05:17:10.272846 1603 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 05:17:10.273280 update_engine[1603]: I20241213 05:17:10.273221 1603 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 05:17:10.273652 update_engine[1603]: I20241213 05:17:10.273478 1603 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 05:17:10.274269 update_engine[1603]: E20241213 05:17:10.273825 1603 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 05:17:10.274269 update_engine[1603]: I20241213 05:17:10.273891 1603 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 05:17:10.274269 update_engine[1603]: I20241213 05:17:10.273912 1603 omaha_request_action.cc:617] Omaha request response: Dec 13 05:17:10.274269 update_engine[1603]: I20241213 05:17:10.273934 1603 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 05:17:10.274269 update_engine[1603]: I20241213 05:17:10.273952 1603 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 05:17:10.274269 update_engine[1603]: I20241213 05:17:10.273969 1603 update_attempter.cc:306] Processing Done. Dec 13 05:17:10.274269 update_engine[1603]: I20241213 05:17:10.273983 1603 update_attempter.cc:310] Error event sent. Dec 13 05:17:10.274269 update_engine[1603]: I20241213 05:17:10.274004 1603 update_check_scheduler.cc:74] Next update check in 47m2s Dec 13 05:17:10.275456 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 05:17:10.276081 locksmithd[1636]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 05:17:11.241460 systemd[1]: Started sshd@13-10.230.15.106:22-147.75.109.163:33282.service - OpenSSH per-connection server daemon (147.75.109.163:33282). Dec 13 05:17:12.166238 sshd[5714]: Accepted publickey for core from 147.75.109.163 port 33282 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:17:12.168827 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:17:12.177808 systemd-logind[1602]: New session 16 of user core. Dec 13 05:17:12.182663 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 05:17:12.962745 sshd[5714]: pam_unix(sshd:session): session closed for user core Dec 13 05:17:12.968043 systemd[1]: sshd@13-10.230.15.106:22-147.75.109.163:33282.service: Deactivated successfully. Dec 13 05:17:12.978346 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 05:17:12.979098 systemd-logind[1602]: Session 16 logged out. Waiting for processes to exit. Dec 13 05:17:12.982467 systemd-logind[1602]: Removed session 16. Dec 13 05:17:18.114514 systemd[1]: Started sshd@14-10.230.15.106:22-147.75.109.163:52992.service - OpenSSH per-connection server daemon (147.75.109.163:52992). Dec 13 05:17:19.021747 sshd[5730]: Accepted publickey for core from 147.75.109.163 port 52992 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4 Dec 13 05:17:19.024382 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 05:17:19.032015 systemd-logind[1602]: New session 17 of user core. Dec 13 05:17:19.039585 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 05:17:19.760815 sshd[5730]: pam_unix(sshd:session): session closed for user core Dec 13 05:17:19.766811 systemd[1]: sshd@14-10.230.15.106:22-147.75.109.163:52992.service: Deactivated successfully. Dec 13 05:17:19.773754 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 05:17:19.776557 systemd-logind[1602]: Session 17 logged out. Waiting for processes to exit. Dec 13 05:17:19.779000 systemd-logind[1602]: Removed session 17. 
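The update_engine records from 05:16:40 onward show an Omaha update check posted to the host "disabled", presumably because automatic updates are switched off on this machine: every libcurl transfer fails with "Could not resolve host", the fetcher retries roughly every ten seconds with a one-second timeout source, and after the retries are exhausted the attempt ends with kActionCodeOmahaErrorInHTTPResponse, an error event is sent, and the next check is scheduled in 47m2s. The Go sketch below reproduces only that outer retry-then-reschedule shape with the standard library; the endpoint, retry count, and intervals are copied from the log, everything else is hypothetical and is not update_engine's implementation.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

// checkForUpdate mimics the shape of the failed Omaha check above: each
// attempt gets a short timeout, failed attempts are retried on a fixed
// interval, and the caller reschedules the whole check much later.
func checkForUpdate(url string, retries int, retryEvery time.Duration) error {
	var lastErr error
	for attempt := 1; attempt <= retries; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
		req, _ := http.NewRequestWithContext(ctx, http.MethodPost, url, nil)
		resp, err := http.DefaultClient.Do(req)
		cancel()
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		// Matches "No HTTP response, retry N" in the log.
		log.Printf("no HTTP response, retry %d: %v", attempt, err)
		time.Sleep(retryEvery)
	}
	return lastErr
}

func main() {
	// "disabled" is not a resolvable host, so this fails just like the log.
	if err := checkForUpdate("http://disabled", 3, 10*time.Second); err != nil {
		log.Printf("update check failed: %v", err)
		log.Printf("next update check in %v", 47*time.Minute+2*time.Second)
	}
}
```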
Dec 13 05:17:19.910417 systemd[1]: Started sshd@15-10.230.15.106:22-147.75.109.163:53006.service - OpenSSH per-connection server daemon (147.75.109.163:53006).
Dec 13 05:17:20.822268 sshd[5744]: Accepted publickey for core from 147.75.109.163 port 53006 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:17:20.823918 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:17:20.832192 systemd-logind[1602]: New session 18 of user core.
Dec 13 05:17:20.837676 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 05:17:22.050144 sshd[5744]: pam_unix(sshd:session): session closed for user core
Dec 13 05:17:22.059006 systemd[1]: sshd@15-10.230.15.106:22-147.75.109.163:53006.service: Deactivated successfully.
Dec 13 05:17:22.065731 systemd-logind[1602]: Session 18 logged out. Waiting for processes to exit.
Dec 13 05:17:22.067665 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 05:17:22.069951 systemd-logind[1602]: Removed session 18.
Dec 13 05:17:22.201504 systemd[1]: Started sshd@16-10.230.15.106:22-147.75.109.163:53010.service - OpenSSH per-connection server daemon (147.75.109.163:53010).
Dec 13 05:17:23.106875 sshd[5757]: Accepted publickey for core from 147.75.109.163 port 53010 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:17:23.112867 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:17:23.133411 systemd-logind[1602]: New session 19 of user core.
Dec 13 05:17:23.138608 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 05:17:25.983066 systemd-journald[1173]: Under memory pressure, flushing caches.
Dec 13 05:17:25.973048 systemd-resolved[1512]: Under memory pressure, flushing caches.
Dec 13 05:17:25.973060 systemd-resolved[1512]: Flushed all caches.
Dec 13 05:17:26.670372 sshd[5757]: pam_unix(sshd:session): session closed for user core
Dec 13 05:17:26.676729 systemd[1]: sshd@16-10.230.15.106:22-147.75.109.163:53010.service: Deactivated successfully.
Dec 13 05:17:26.684604 systemd-logind[1602]: Session 19 logged out. Waiting for processes to exit.
Dec 13 05:17:26.685882 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 05:17:26.688715 systemd-logind[1602]: Removed session 19.
Dec 13 05:17:26.820796 systemd[1]: Started sshd@17-10.230.15.106:22-147.75.109.163:45866.service - OpenSSH per-connection server daemon (147.75.109.163:45866).
Dec 13 05:17:27.727884 sshd[5779]: Accepted publickey for core from 147.75.109.163 port 45866 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:17:27.731379 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:17:27.743571 systemd-logind[1602]: New session 20 of user core.
Dec 13 05:17:27.752844 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 05:17:28.021268 systemd-journald[1173]: Under memory pressure, flushing caches.
Dec 13 05:17:28.020318 systemd-resolved[1512]: Under memory pressure, flushing caches.
Dec 13 05:17:28.020349 systemd-resolved[1512]: Flushed all caches.
Dec 13 05:17:29.162737 sshd[5779]: pam_unix(sshd:session): session closed for user core
Dec 13 05:17:29.171371 systemd[1]: sshd@17-10.230.15.106:22-147.75.109.163:45866.service: Deactivated successfully.
Dec 13 05:17:29.178504 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 05:17:29.179253 systemd-logind[1602]: Session 20 logged out. Waiting for processes to exit.
Dec 13 05:17:29.183755 systemd-logind[1602]: Removed session 20.
Dec 13 05:17:29.313505 systemd[1]: Started sshd@18-10.230.15.106:22-147.75.109.163:45868.service - OpenSSH per-connection server daemon (147.75.109.163:45868).
Dec 13 05:17:30.231428 sshd[5791]: Accepted publickey for core from 147.75.109.163 port 45868 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:17:30.235879 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:17:30.253073 systemd-logind[1602]: New session 21 of user core.
Dec 13 05:17:30.257681 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 05:17:30.965246 sshd[5791]: pam_unix(sshd:session): session closed for user core
Dec 13 05:17:30.970258 systemd[1]: sshd@18-10.230.15.106:22-147.75.109.163:45868.service: Deactivated successfully.
Dec 13 05:17:30.975840 systemd-logind[1602]: Session 21 logged out. Waiting for processes to exit.
Dec 13 05:17:30.977480 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 05:17:30.981618 systemd-logind[1602]: Removed session 21.
Dec 13 05:17:36.117553 systemd[1]: Started sshd@19-10.230.15.106:22-147.75.109.163:45874.service - OpenSSH per-connection server daemon (147.75.109.163:45874).
Dec 13 05:17:37.040190 sshd[5831]: Accepted publickey for core from 147.75.109.163 port 45874 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:17:37.041623 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:17:37.053600 systemd-logind[1602]: New session 22 of user core.
Dec 13 05:17:37.058695 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 05:17:37.823405 sshd[5831]: pam_unix(sshd:session): session closed for user core
Dec 13 05:17:37.833422 systemd[1]: sshd@19-10.230.15.106:22-147.75.109.163:45874.service: Deactivated successfully.
Dec 13 05:17:37.844934 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 05:17:37.845656 systemd-logind[1602]: Session 22 logged out. Waiting for processes to exit.
Dec 13 05:17:37.850087 systemd-logind[1602]: Removed session 22.
Dec 13 05:17:42.976301 systemd[1]: Started sshd@20-10.230.15.106:22-147.75.109.163:54020.service - OpenSSH per-connection server daemon (147.75.109.163:54020).
Dec 13 05:17:43.882728 sshd[5867]: Accepted publickey for core from 147.75.109.163 port 54020 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:17:43.887413 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:17:43.904230 systemd-logind[1602]: New session 23 of user core.
Dec 13 05:17:43.909631 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 05:17:44.641650 sshd[5867]: pam_unix(sshd:session): session closed for user core
Dec 13 05:17:44.647057 systemd[1]: sshd@20-10.230.15.106:22-147.75.109.163:54020.service: Deactivated successfully.
Dec 13 05:17:44.647214 systemd-logind[1602]: Session 23 logged out. Waiting for processes to exit.
Dec 13 05:17:44.653450 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 05:17:44.654769 systemd-logind[1602]: Removed session 23.
Dec 13 05:17:49.791561 systemd[1]: Started sshd@21-10.230.15.106:22-147.75.109.163:49790.service - OpenSSH per-connection server daemon (147.75.109.163:49790).
Dec 13 05:17:50.705202 sshd[5891]: Accepted publickey for core from 147.75.109.163 port 49790 ssh2: RSA SHA256:JktB8wb5fVvbEi8yoOunjtIIYwdGEaaIVVgKJhYN2Y4
Dec 13 05:17:50.708217 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 05:17:50.715721 systemd-logind[1602]: New session 24 of user core.
Dec 13 05:17:50.724055 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 05:17:51.409472 sshd[5891]: pam_unix(sshd:session): session closed for user core
Dec 13 05:17:51.413581 systemd-logind[1602]: Session 24 logged out. Waiting for processes to exit.
Dec 13 05:17:51.414660 systemd[1]: sshd@21-10.230.15.106:22-147.75.109.163:49790.service: Deactivated successfully.
Dec 13 05:17:51.421576 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 05:17:51.423400 systemd-logind[1602]: Removed session 24.