Nov 13 11:56:18.907212 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 13 11:56:18.907258 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 13 11:56:18.907268 kernel: BIOS-provided physical RAM map:
Nov 13 11:56:18.907279 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 13 11:56:18.907286 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 13 11:56:18.907293 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 13 11:56:18.907302 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 13 11:56:18.907309 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 13 11:56:18.907317 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 13 11:56:18.907324 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 13 11:56:18.907332 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 13 11:56:18.907339 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 13 11:56:18.907349 kernel: NX (Execute Disable) protection: active
Nov 13 11:56:18.907357 kernel: APIC: Static calls initialized
Nov 13 11:56:18.907366 kernel: SMBIOS 2.8 present.
Nov 13 11:56:18.907375 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 13 11:56:18.907383 kernel: Hypervisor detected: KVM
Nov 13 11:56:18.907394 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 13 11:56:18.907402 kernel: kvm-clock: using sched offset of 3734512621 cycles
Nov 13 11:56:18.907412 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 13 11:56:18.907420 kernel: tsc: Detected 2294.576 MHz processor
Nov 13 11:56:18.907429 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 13 11:56:18.907438 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 13 11:56:18.907446 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 13 11:56:18.907455 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 13 11:56:18.907463 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 13 11:56:18.907474 kernel: Using GB pages for direct mapping
Nov 13 11:56:18.907483 kernel: ACPI: Early table checksum verification disabled
Nov 13 11:56:18.907491 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 13 11:56:18.907500 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 11:56:18.907508 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 11:56:18.907517 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 11:56:18.907525 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 13 11:56:18.907534 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 11:56:18.907542 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 11:56:18.907553 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 11:56:18.907561 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 11:56:18.907570 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 13 11:56:18.907578 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 13 11:56:18.907587 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 13 11:56:18.907600 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 13 11:56:18.907608 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 13 11:56:18.907620 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 13 11:56:18.907629 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 13 11:56:18.907638 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 13 11:56:18.907647 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 13 11:56:18.907656 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 13 11:56:18.907665 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 13 11:56:18.907674 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 13 11:56:18.907685 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 13 11:56:18.907694 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 13 11:56:18.907703 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 13 11:56:18.907712 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 13 11:56:18.907721 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 13 11:56:18.907729 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 13 11:56:18.907738 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 13 11:56:18.907747 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 13 11:56:18.907756 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 13 11:56:18.907765 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 13 11:56:18.907776 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 13 11:56:18.907785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 13 11:56:18.907794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 13 11:56:18.907803 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 13 11:56:18.907812 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 13 11:56:18.907821 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 13 11:56:18.907830 kernel: Zone ranges:
Nov 13 11:56:18.907839 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 13 11:56:18.907848 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 13 11:56:18.907859 kernel: Normal empty
Nov 13 11:56:18.907868 kernel: Movable zone start for each node
Nov 13 11:56:18.907877 kernel: Early memory node ranges
Nov 13 11:56:18.907886 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 13 11:56:18.907895 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 13 11:56:18.907904 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 13 11:56:18.907913 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 13 11:56:18.907922 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 13 11:56:18.907930 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 13 11:56:18.907939 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 13 11:56:18.907951 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 13 11:56:18.907960 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 13 11:56:18.907968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 13 11:56:18.907977 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 13 11:56:18.907986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 13 11:56:18.907995 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 13 11:56:18.908004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 13 11:56:18.908012 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 13 11:56:18.908021 kernel: TSC deadline timer available
Nov 13 11:56:18.908033 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 13 11:56:18.908042 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 13 11:56:18.908051 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 13 11:56:18.908060 kernel: Booting paravirtualized kernel on KVM
Nov 13 11:56:18.908069 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 13 11:56:18.908078 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 13 11:56:18.908087 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Nov 13 11:56:18.908096 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Nov 13 11:56:18.908105 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 13 11:56:18.908116 kernel: kvm-guest: PV spinlocks enabled
Nov 13 11:56:18.908125 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 13 11:56:18.908135 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 13 11:56:18.908144 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 13 11:56:18.908153 kernel: random: crng init done
Nov 13 11:56:18.908169 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 13 11:56:18.908179 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 13 11:56:18.908188 kernel: Fallback order for Node 0: 0
Nov 13 11:56:18.908206 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 13 11:56:18.908215 kernel: Policy zone: DMA32
Nov 13 11:56:18.908224 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 13 11:56:18.908233 kernel: software IO TLB: area num 16.
Nov 13 11:56:18.908242 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 194824K reserved, 0K cma-reserved)
Nov 13 11:56:18.908251 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 13 11:56:18.908260 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 13 11:56:18.908269 kernel: ftrace: allocated 148 pages with 3 groups
Nov 13 11:56:18.908278 kernel: Dynamic Preempt: voluntary
Nov 13 11:56:18.908290 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 13 11:56:18.908300 kernel: rcu: RCU event tracing is enabled.
Nov 13 11:56:18.908309 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 13 11:56:18.908318 kernel: Trampoline variant of Tasks RCU enabled.
Nov 13 11:56:18.908328 kernel: Rude variant of Tasks RCU enabled.
Nov 13 11:56:18.908347 kernel: Tracing variant of Tasks RCU enabled.
Nov 13 11:56:18.908364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 13 11:56:18.908383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 13 11:56:18.908392 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 13 11:56:18.908402 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 13 11:56:18.908411 kernel: Console: colour VGA+ 80x25
Nov 13 11:56:18.908421 kernel: printk: console [tty0] enabled
Nov 13 11:56:18.908433 kernel: printk: console [ttyS0] enabled
Nov 13 11:56:18.908442 kernel: ACPI: Core revision 20230628
Nov 13 11:56:18.908452 kernel: APIC: Switch to symmetric I/O mode setup
Nov 13 11:56:18.908461 kernel: x2apic enabled
Nov 13 11:56:18.908471 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 13 11:56:18.908483 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns
Nov 13 11:56:18.908493 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576)
Nov 13 11:56:18.908503 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 13 11:56:18.908512 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 13 11:56:18.908522 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 13 11:56:18.908531 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 13 11:56:18.908541 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 13 11:56:18.908550 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 13 11:56:18.908560 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 13 11:56:18.908572 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 13 11:56:18.908581 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 13 11:56:18.908591 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 13 11:56:18.908600 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 13 11:56:18.908610 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 13 11:56:18.908619 kernel: TAA: Mitigation: Clear CPU buffers
Nov 13 11:56:18.908628 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 13 11:56:18.908638 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 13 11:56:18.908648 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 13 11:56:18.908657 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 13 11:56:18.908666 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 13 11:56:18.908678 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 13 11:56:18.908688 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 13 11:56:18.908697 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 13 11:56:18.908707 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 13 11:56:18.908716 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 13 11:56:18.908726 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 13 11:56:18.908735 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 13 11:56:18.908745 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 13 11:56:18.908754 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Nov 13 11:56:18.908764 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Nov 13 11:56:18.908773 kernel: Freeing SMP alternatives memory: 32K
Nov 13 11:56:18.908783 kernel: pid_max: default: 32768 minimum: 301
Nov 13 11:56:18.908795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 13 11:56:18.908804 kernel: landlock: Up and running.
Nov 13 11:56:18.908814 kernel: SELinux: Initializing.
Nov 13 11:56:18.908823 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 11:56:18.908833 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 11:56:18.908842 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Nov 13 11:56:18.908852 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 13 11:56:18.908862 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 13 11:56:18.908872 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 13 11:56:18.908881 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 13 11:56:18.908893 kernel: signal: max sigframe size: 3632
Nov 13 11:56:18.908903 kernel: rcu: Hierarchical SRCU implementation.
Nov 13 11:56:18.908913 kernel: rcu: Max phase no-delay instances is 400.
Nov 13 11:56:18.908922 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 13 11:56:18.908932 kernel: smp: Bringing up secondary CPUs ...
Nov 13 11:56:18.908941 kernel: smpboot: x86: Booting SMP configuration:
Nov 13 11:56:18.908951 kernel: .... node #0, CPUs: #1
Nov 13 11:56:18.908960 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 13 11:56:18.908970 kernel: smp: Brought up 1 node, 2 CPUs
Nov 13 11:56:18.908982 kernel: smpboot: Max logical packages: 16
Nov 13 11:56:18.908991 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS)
Nov 13 11:56:18.909001 kernel: devtmpfs: initialized
Nov 13 11:56:18.909010 kernel: x86/mm: Memory block size: 128MB
Nov 13 11:56:18.909020 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 13 11:56:18.909030 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 13 11:56:18.909039 kernel: pinctrl core: initialized pinctrl subsystem
Nov 13 11:56:18.909049 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 13 11:56:18.909058 kernel: audit: initializing netlink subsys (disabled)
Nov 13 11:56:18.909070 kernel: audit: type=2000 audit(1731498977.388:1): state=initialized audit_enabled=0 res=1
Nov 13 11:56:18.909080 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 13 11:56:18.909089 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 13 11:56:18.909099 kernel: cpuidle: using governor menu
Nov 13 11:56:18.909108 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 13 11:56:18.909118 kernel: dca service started, version 1.12.1
Nov 13 11:56:18.909127 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 13 11:56:18.909137 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 13 11:56:18.909147 kernel: PCI: Using configuration type 1 for base access
Nov 13 11:56:18.909158 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 13 11:56:18.909256 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 13 11:56:18.909265 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 13 11:56:18.909275 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 13 11:56:18.909284 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 13 11:56:18.909294 kernel: ACPI: Added _OSI(Module Device)
Nov 13 11:56:18.909304 kernel: ACPI: Added _OSI(Processor Device)
Nov 13 11:56:18.909314 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 13 11:56:18.909323 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 13 11:56:18.909336 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 13 11:56:18.909346 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 13 11:56:18.909355 kernel: ACPI: Interpreter enabled
Nov 13 11:56:18.909364 kernel: ACPI: PM: (supports S0 S5)
Nov 13 11:56:18.909374 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 13 11:56:18.909383 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 13 11:56:18.909393 kernel: PCI: Using E820 reservations for host bridge windows
Nov 13 11:56:18.909403 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 13 11:56:18.909412 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 13 11:56:18.909587 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 13 11:56:18.909687 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 13 11:56:18.909777 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 13 11:56:18.909790 kernel: PCI host bridge to bus 0000:00
Nov 13 11:56:18.909886 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 13 11:56:18.909969 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 13 11:56:18.910054 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 13 11:56:18.910133 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 13 11:56:18.910251 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 13 11:56:18.910332 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 13 11:56:18.910411 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 13 11:56:18.910516 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 13 11:56:18.910614 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 13 11:56:18.910710 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 13 11:56:18.910801 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 13 11:56:18.910891 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 13 11:56:18.910980 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 13 11:56:18.911079 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.911181 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 13 11:56:18.911289 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.911385 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 13 11:56:18.911480 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.911572 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 13 11:56:18.911674 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.911765 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 13 11:56:18.911863 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.911957 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 13 11:56:18.912054 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.912145 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 13 11:56:18.912267 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.912358 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 13 11:56:18.912454 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 13 11:56:18.912549 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 13 11:56:18.912645 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 13 11:56:18.912736 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 13 11:56:18.912825 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 13 11:56:18.912915 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 13 11:56:18.913006 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 13 11:56:18.913102 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 13 11:56:18.913231 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 13 11:56:18.913334 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 13 11:56:18.913425 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 13 11:56:18.913522 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 13 11:56:18.913611 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 13 11:56:18.913711 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 13 11:56:18.913806 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 13 11:56:18.913895 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 13 11:56:18.913992 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 13 11:56:18.914084 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 13 11:56:18.914206 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 13 11:56:18.914307 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 13 11:56:18.914404 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 13 11:56:18.914495 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 13 11:56:18.914585 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 13 11:56:18.914682 kernel: pci_bus 0000:02: extended config space not accessible
Nov 13 11:56:18.914787 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 13 11:56:18.914885 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 13 11:56:18.914979 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 13 11:56:18.915076 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 13 11:56:18.915187 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 13 11:56:18.915349 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 13 11:56:18.915440 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 13 11:56:18.915530 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 13 11:56:18.915621 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 13 11:56:18.915720 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 13 11:56:18.915818 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 13 11:56:18.915908 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 13 11:56:18.915997 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 13 11:56:18.916085 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 13 11:56:18.916187 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 13 11:56:18.916327 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 13 11:56:18.916417 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 13 11:56:18.916508 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 13 11:56:18.916601 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 13 11:56:18.916689 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 13 11:56:18.916778 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 13 11:56:18.916866 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 13 11:56:18.916954 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 13 11:56:18.917044 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 13 11:56:18.917133 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 13 11:56:18.917256 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 13 11:56:18.917353 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 13 11:56:18.917441 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 13 11:56:18.917531 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 13 11:56:18.917544 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 13 11:56:18.917554 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 13 11:56:18.917564 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 13 11:56:18.917574 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 13 11:56:18.917583 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 13 11:56:18.917597 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 13 11:56:18.917607 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 13 11:56:18.917617 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 13 11:56:18.917626 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 13 11:56:18.917636 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 13 11:56:18.917645 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 13 11:56:18.917655 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 13 11:56:18.917664 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 13 11:56:18.917674 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 13 11:56:18.917686 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 13 11:56:18.917696 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 13 11:56:18.917706 kernel: iommu: Default domain type: Translated
Nov 13 11:56:18.917715 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 13 11:56:18.917725 kernel: PCI: Using ACPI for IRQ routing
Nov 13 11:56:18.917735 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 13 11:56:18.917744 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 13 11:56:18.917753 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 13 11:56:18.917842 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 13 11:56:18.917935 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 13 11:56:18.918024 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 13 11:56:18.918043 kernel: vgaarb: loaded
Nov 13 11:56:18.918053 kernel: clocksource: Switched to clocksource kvm-clock
Nov 13 11:56:18.918063 kernel: VFS: Disk quotas dquot_6.6.0
Nov 13 11:56:18.918073 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 13 11:56:18.918083 kernel: pnp: PnP ACPI init
Nov 13 11:56:18.918186 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 13 11:56:18.918233 kernel: pnp: PnP ACPI: found 5 devices
Nov 13 11:56:18.918243 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 13 11:56:18.918253 kernel: NET: Registered PF_INET protocol family
Nov 13 11:56:18.918262 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 13 11:56:18.918272 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 13 11:56:18.918282 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 13 11:56:18.918293 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 13 11:56:18.918303 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 13 11:56:18.918312 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 13 11:56:18.918325 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 11:56:18.918334 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 11:56:18.918344 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 13 11:56:18.918354 kernel: NET: Registered PF_XDP protocol family
Nov 13 11:56:18.918445 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 13 11:56:18.918535 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 13 11:56:18.918625 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 13 11:56:18.918718 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 13 11:56:18.918807 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 13 11:56:18.918896 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 13 11:56:18.918988 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 13 11:56:18.919097 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 13 11:56:18.919209 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 13 11:56:18.919305 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 13 11:56:18.919395 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 13 11:56:18.919484 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 13 11:56:18.919572 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 13 11:56:18.919662 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 13 11:56:18.919751 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 13 11:56:18.919844 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 13 11:56:18.919938 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 13 11:56:18.920035 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 13 11:56:18.920125 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 13 11:56:18.920268 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 13 11:56:18.920359 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 13 11:56:18.920453 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 13 11:56:18.920542 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 13 11:56:18.920633 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 13 11:56:18.920721 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 13 11:56:18.920810 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 13 11:56:18.920899 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 13 11:56:18.920986 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 13 11:56:18.921075 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 13 11:56:18.921172 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 13 11:56:18.921300 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 13 11:56:18.921390 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 13 11:56:18.921484 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 13 11:56:18.921573 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 13 11:56:18.921662 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 13 11:56:18.921752 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 13 11:56:18.921841 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 13 11:56:18.921930 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 13 11:56:18.922019 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 13 11:56:18.922108 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 13 11:56:18.922220 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 13 11:56:18.922315 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 13 11:56:18.922405 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 13 11:56:18.922495 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 13 11:56:18.922583 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 13 11:56:18.922672 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 13 11:56:18.922765 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 13 11:56:18.922854 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 13 11:56:18.922942 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 13 11:56:18.923031 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 13 11:56:18.923116 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 13 11:56:18.923229 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 13 11:56:18.923310 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 13 11:56:18.923397 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 13 11:56:18.923481 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 13 11:56:18.923560 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 13 11:56:18.923650 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 13 11:56:18.923734 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 13 11:56:18.923820 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 13 11:56:18.923912 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 13 11:56:18.924001 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 13 11:56:18.924088 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 13 11:56:18.924178 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 13 11:56:18.924573 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 13 11:56:18.924659 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 13 11:56:18.924905 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 13 11:56:18.925007 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 13 11:56:18.925094 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 13 11:56:18.925339 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 13 11:56:18.925539 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 13 11:56:18.925626 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 13 11:56:18.925712 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 13 11:56:18.925801 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 13 11:56:18.925885 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 13 11:56:18.925973 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 13 11:56:18.926061 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 13 11:56:18.926145 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 13 11:56:18.926248 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 13 11:56:18.926342 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 13 11:56:18.926426 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 13 11:56:18.926509 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 13 11:56:18.926528 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 13 11:56:18.926542 kernel: PCI: CLS 0 bytes, default 64
Nov 13 11:56:18.926553 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov
13 11:56:18.926563 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Nov 13 11:56:18.926574 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 13 11:56:18.926584 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Nov 13 11:56:18.926595 kernel: Initialise system trusted keyrings Nov 13 11:56:18.926605 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 13 11:56:18.926616 kernel: Key type asymmetric registered Nov 13 11:56:18.926629 kernel: Asymmetric key parser 'x509' registered Nov 13 11:56:18.926639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 13 11:56:18.926649 kernel: io scheduler mq-deadline registered Nov 13 11:56:18.926660 kernel: io scheduler kyber registered Nov 13 11:56:18.926670 kernel: io scheduler bfq registered Nov 13 11:56:18.926770 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 13 11:56:18.926865 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 13 11:56:18.926958 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.927056 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 13 11:56:18.927148 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 13 11:56:18.927284 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.927378 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 13 11:56:18.927469 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 13 11:56:18.927558 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.927654 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 13 
11:56:18.927743 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 13 11:56:18.927833 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.927924 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 13 11:56:18.928014 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 13 11:56:18.928104 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.928215 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 13 11:56:18.928307 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 13 11:56:18.928398 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.928490 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 13 11:56:18.928580 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 13 11:56:18.928671 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.928765 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 13 11:56:18.928855 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 13 11:56:18.928944 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 13 11:56:18.928958 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 13 11:56:18.928969 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 13 11:56:18.928980 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 13 11:56:18.928990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 13 11:56:18.929004 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 13 11:56:18.929015 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 13 11:56:18.929025 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 13 11:56:18.929035 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 13 11:56:18.929046 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 13 11:56:18.929137 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 13 11:56:18.931294 kernel: rtc_cmos 00:03: registered as rtc0 Nov 13 11:56:18.931399 kernel: rtc_cmos 00:03: setting system clock to 2024-11-13T11:56:18 UTC (1731498978) Nov 13 11:56:18.931491 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 13 11:56:18.931505 kernel: intel_pstate: CPU model not supported Nov 13 11:56:18.931516 kernel: NET: Registered PF_INET6 protocol family Nov 13 11:56:18.931527 kernel: Segment Routing with IPv6 Nov 13 11:56:18.931537 kernel: In-situ OAM (IOAM) with IPv6 Nov 13 11:56:18.931547 kernel: NET: Registered PF_PACKET protocol family Nov 13 11:56:18.931558 kernel: Key type dns_resolver registered Nov 13 11:56:18.931569 kernel: IPI shorthand broadcast: enabled Nov 13 11:56:18.931579 kernel: sched_clock: Marking stable (871011330, 122962984)->(1167141455, -173167141) Nov 13 11:56:18.931593 kernel: registered taskstats version 1 Nov 13 11:56:18.931604 kernel: Loading compiled-in X.509 certificates Nov 13 11:56:18.931614 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 13 11:56:18.931624 kernel: Key type .fscrypt registered Nov 13 11:56:18.931635 kernel: Key type fscrypt-provisioning registered Nov 13 11:56:18.931645 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 13 11:56:18.931656 kernel: ima: Allocated hash algorithm: sha1 Nov 13 11:56:18.931666 kernel: ima: No architecture policies found Nov 13 11:56:18.931676 kernel: clk: Disabling unused clocks Nov 13 11:56:18.931689 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 13 11:56:18.931699 kernel: Write protecting the kernel read-only data: 36864k Nov 13 11:56:18.931710 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 13 11:56:18.931720 kernel: Run /init as init process Nov 13 11:56:18.931730 kernel: with arguments: Nov 13 11:56:18.931740 kernel: /init Nov 13 11:56:18.931750 kernel: with environment: Nov 13 11:56:18.931760 kernel: HOME=/ Nov 13 11:56:18.931770 kernel: TERM=linux Nov 13 11:56:18.931783 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 13 11:56:18.931796 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 13 11:56:18.931809 systemd[1]: Detected virtualization kvm. Nov 13 11:56:18.931820 systemd[1]: Detected architecture x86-64. Nov 13 11:56:18.931831 systemd[1]: Running in initrd. Nov 13 11:56:18.931841 systemd[1]: No hostname configured, using default hostname. Nov 13 11:56:18.931851 systemd[1]: Hostname set to . Nov 13 11:56:18.931865 systemd[1]: Initializing machine ID from VM UUID. Nov 13 11:56:18.931876 systemd[1]: Queued start job for default target initrd.target. Nov 13 11:56:18.931886 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 11:56:18.931897 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 13 11:56:18.931909 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 13 11:56:18.931920 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 13 11:56:18.931931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 13 11:56:18.931942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 13 11:56:18.931957 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 13 11:56:18.931969 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 13 11:56:18.931980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 11:56:18.931990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 13 11:56:18.932001 systemd[1]: Reached target paths.target - Path Units. Nov 13 11:56:18.932012 systemd[1]: Reached target slices.target - Slice Units. Nov 13 11:56:18.932023 systemd[1]: Reached target swap.target - Swaps. Nov 13 11:56:18.932036 systemd[1]: Reached target timers.target - Timer Units. Nov 13 11:56:18.932047 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 13 11:56:18.932058 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 13 11:56:18.932069 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 13 11:56:18.932080 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 13 11:56:18.932091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 13 11:56:18.932101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 13 11:56:18.932112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 13 11:56:18.932124 systemd[1]: Reached target sockets.target - Socket Units. Nov 13 11:56:18.932137 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 13 11:56:18.932148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 13 11:56:18.932159 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 13 11:56:18.932180 systemd[1]: Starting systemd-fsck-usr.service... Nov 13 11:56:18.932210 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 13 11:56:18.932222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 13 11:56:18.932279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 11:56:18.932325 systemd-journald[200]: Collecting audit messages is disabled. Nov 13 11:56:18.932355 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 13 11:56:18.932366 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 11:56:18.932377 systemd[1]: Finished systemd-fsck-usr.service. Nov 13 11:56:18.932392 systemd-journald[200]: Journal started Nov 13 11:56:18.932416 systemd-journald[200]: Runtime Journal (/run/log/journal/480bec079c6b4681bd5f8df586d4f68b) is 4.7M, max 38.0M, 33.2M free. Nov 13 11:56:18.932558 systemd-modules-load[202]: Inserted module 'overlay' Nov 13 11:56:18.941215 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 13 11:56:18.943230 systemd[1]: Started systemd-journald.service - Journal Service. Nov 13 11:56:18.965458 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 13 11:56:18.966441 systemd-modules-load[202]: Inserted module 'br_netfilter' Nov 13 11:56:18.985305 kernel: Bridge firewalling registered Nov 13 11:56:18.985363 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Nov 13 11:56:18.986038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 11:56:18.987085 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 13 11:56:18.994355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 11:56:18.996418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 11:56:18.998588 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 13 11:56:19.007583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 13 11:56:19.023460 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 11:56:19.026028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 13 11:56:19.029510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 11:56:19.035329 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 13 11:56:19.035901 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 11:56:19.040441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 13 11:56:19.058077 dracut-cmdline[236]: dracut-dracut-053 Nov 13 11:56:19.061991 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 13 11:56:19.068231 systemd-resolved[235]: Positive Trust Anchors: Nov 13 11:56:19.068250 systemd-resolved[235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 13 11:56:19.068289 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 13 11:56:19.074800 systemd-resolved[235]: Defaulting to hostname 'linux'. Nov 13 11:56:19.076415 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 13 11:56:19.076909 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 13 11:56:19.189263 kernel: SCSI subsystem initialized Nov 13 11:56:19.206264 kernel: Loading iSCSI transport class v2.0-870. Nov 13 11:56:19.219290 kernel: iscsi: registered transport (tcp) Nov 13 11:56:19.248271 kernel: iscsi: registered transport (qla4xxx) Nov 13 11:56:19.248425 kernel: QLogic iSCSI HBA Driver Nov 13 11:56:19.306948 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 13 11:56:19.312366 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 13 11:56:19.339408 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 13 11:56:19.339508 kernel: device-mapper: uevent: version 1.0.3 Nov 13 11:56:19.339529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 13 11:56:19.394239 kernel: raid6: avx512x4 gen() 17737 MB/s Nov 13 11:56:19.408278 kernel: raid6: avx512x2 gen() 17734 MB/s Nov 13 11:56:19.425255 kernel: raid6: avx512x1 gen() 17746 MB/s Nov 13 11:56:19.442261 kernel: raid6: avx2x4 gen() 17759 MB/s Nov 13 11:56:19.459232 kernel: raid6: avx2x2 gen() 17714 MB/s Nov 13 11:56:19.476302 kernel: raid6: avx2x1 gen() 13715 MB/s Nov 13 11:56:19.476442 kernel: raid6: using algorithm avx2x4 gen() 17759 MB/s Nov 13 11:56:19.494353 kernel: raid6: .... xor() 7030 MB/s, rmw enabled Nov 13 11:56:19.494449 kernel: raid6: using avx512x2 recovery algorithm Nov 13 11:56:19.517277 kernel: xor: automatically using best checksumming function avx Nov 13 11:56:19.696259 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 13 11:56:19.720680 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 13 11:56:19.727363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 13 11:56:19.743988 systemd-udevd[419]: Using default interface naming scheme 'v255'. Nov 13 11:56:19.749084 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 11:56:19.760380 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 13 11:56:19.782062 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Nov 13 11:56:19.837528 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 13 11:56:19.843476 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 13 11:56:19.905087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 13 11:56:19.917654 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 13 11:56:19.934049 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 13 11:56:19.935608 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 13 11:56:19.936314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 11:56:19.937536 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 13 11:56:19.942323 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 13 11:56:19.963756 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 13 11:56:19.988252 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 13 11:56:20.023155 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 13 11:56:20.023326 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 13 11:56:20.023349 kernel: GPT:17805311 != 125829119 Nov 13 11:56:20.023362 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 13 11:56:20.023374 kernel: GPT:17805311 != 125829119 Nov 13 11:56:20.023386 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 13 11:56:20.023399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 11:56:20.023411 kernel: cryptd: max_cpu_qlen set to 1000 Nov 13 11:56:20.031686 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 13 11:56:20.031805 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 11:56:20.033006 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 11:56:20.033667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 13 11:56:20.033872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 11:56:20.034481 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 13 11:56:20.043711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 11:56:20.045558 kernel: AVX2 version of gcm_enc/dec engaged. Nov 13 11:56:20.047215 kernel: AES CTR mode by8 optimization enabled Nov 13 11:56:20.062379 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) Nov 13 11:56:20.064927 kernel: ACPI: bus type USB registered Nov 13 11:56:20.064963 kernel: usbcore: registered new interface driver usbfs Nov 13 11:56:20.065892 kernel: usbcore: registered new interface driver hub Nov 13 11:56:20.066709 kernel: usbcore: registered new device driver usb Nov 13 11:56:20.071234 kernel: libata version 3.00 loaded. Nov 13 11:56:20.072663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 13 11:56:20.085157 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (464) Nov 13 11:56:20.107507 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Nov 13 11:56:20.116259 kernel: ahci 0000:00:1f.2: version 3.0 Nov 13 11:56:20.158517 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 13 11:56:20.158554 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 13 11:56:20.158698 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 13 11:56:20.158813 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 13 11:56:20.158936 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 13 11:56:20.159048 kernel: scsi host0: ahci Nov 13 11:56:20.159182 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 13 11:56:20.159551 kernel: scsi host1: ahci Nov 13 11:56:20.159673 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 13 11:56:20.159785 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 13 11:56:20.159896 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 13 11:56:20.160005 kernel: scsi host2: ahci Nov 13 11:56:20.160124 kernel: hub 1-0:1.0: USB hub found Nov 13 11:56:20.161279 kernel: hub 1-0:1.0: 4 ports detected Nov 13 11:56:20.161416 kernel: scsi host3: ahci Nov 13 11:56:20.161536 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Nov 13 11:56:20.161720 kernel: hub 2-0:1.0: USB hub found Nov 13 11:56:20.161855 kernel: hub 2-0:1.0: 4 ports detected Nov 13 11:56:20.161975 kernel: scsi host4: ahci Nov 13 11:56:20.162096 kernel: scsi host5: ahci Nov 13 11:56:20.163058 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Nov 13 11:56:20.163094 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Nov 13 11:56:20.163108 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Nov 13 11:56:20.163121 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Nov 13 11:56:20.163134 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Nov 13 11:56:20.163147 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Nov 13 11:56:20.150139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 13 11:56:20.184057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 11:56:20.188977 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 13 11:56:20.189513 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 13 11:56:20.198615 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 13 11:56:20.200349 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 11:56:20.208598 disk-uuid[565]: Primary Header is updated. Nov 13 11:56:20.208598 disk-uuid[565]: Secondary Entries is updated. Nov 13 11:56:20.208598 disk-uuid[565]: Secondary Header is updated. 
Nov 13 11:56:20.212299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 11:56:20.217271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 11:56:20.224243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 11:56:20.239045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 11:56:20.393262 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 13 11:56:20.469243 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 13 11:56:20.469368 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 13 11:56:20.472019 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 13 11:56:20.472400 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 13 11:56:20.475104 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 13 11:56:20.477793 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 13 11:56:20.540236 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 13 11:56:20.545411 kernel: usbcore: registered new interface driver usbhid Nov 13 11:56:20.545483 kernel: usbhid: USB HID core driver Nov 13 11:56:20.552095 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 13 11:56:20.552176 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 13 11:56:21.231227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 11:56:21.233282 disk-uuid[566]: The operation has completed successfully. Nov 13 11:56:21.286076 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 13 11:56:21.286212 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 13 11:56:21.297422 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 13 11:56:21.300829 sh[588]: Success Nov 13 11:56:21.320236 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 13 11:56:21.388124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 13 11:56:21.390536 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 13 11:56:21.398402 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 13 11:56:21.430467 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 13 11:56:21.430536 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 13 11:56:21.434768 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 13 11:56:21.434874 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 13 11:56:21.436004 kernel: BTRFS info (device dm-0): using free space tree Nov 13 11:56:21.444085 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 13 11:56:21.446400 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 13 11:56:21.461446 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 13 11:56:21.465303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 13 11:56:21.475395 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 11:56:21.475439 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 11:56:21.475461 kernel: BTRFS info (device vda6): using free space tree Nov 13 11:56:21.481220 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 11:56:21.487735 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Nov 13 11:56:21.489266 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 11:56:21.495964 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 13 11:56:21.501351 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 13 11:56:21.605758 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 13 11:56:21.619122 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 13 11:56:21.639012 ignition[663]: Ignition 2.19.0 Nov 13 11:56:21.641304 ignition[663]: Stage: fetch-offline Nov 13 11:56:21.641363 ignition[663]: no configs at "/usr/lib/ignition/base.d" Nov 13 11:56:21.641376 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 11:56:21.641523 ignition[663]: parsed url from cmdline: "" Nov 13 11:56:21.641527 ignition[663]: no config URL provided Nov 13 11:56:21.641532 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 11:56:21.641540 ignition[663]: no config at "/usr/lib/ignition/user.ign" Nov 13 11:56:21.644730 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 13 11:56:21.641546 ignition[663]: failed to fetch config: resource requires networking Nov 13 11:56:21.641750 ignition[663]: Ignition finished successfully Nov 13 11:56:21.647483 systemd-networkd[778]: lo: Link UP Nov 13 11:56:21.647486 systemd-networkd[778]: lo: Gained carrier Nov 13 11:56:21.648889 systemd-networkd[778]: Enumeration completed Nov 13 11:56:21.649336 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 13 11:56:21.649432 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 11:56:21.649437 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 13 11:56:21.649866 systemd[1]: Reached target network.target - Network. Nov 13 11:56:21.650550 systemd-networkd[778]: eth0: Link UP Nov 13 11:56:21.650555 systemd-networkd[778]: eth0: Gained carrier Nov 13 11:56:21.650563 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 11:56:21.655400 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 13 11:56:21.662249 systemd-networkd[778]: eth0: DHCPv4 address 10.244.96.58/30, gateway 10.244.96.57 acquired from 10.244.96.57 Nov 13 11:56:21.672749 ignition[782]: Ignition 2.19.0 Nov 13 11:56:21.672760 ignition[782]: Stage: fetch Nov 13 11:56:21.672941 ignition[782]: no configs at "/usr/lib/ignition/base.d" Nov 13 11:56:21.672951 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 11:56:21.673061 ignition[782]: parsed url from cmdline: "" Nov 13 11:56:21.673065 ignition[782]: no config URL provided Nov 13 11:56:21.673070 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 11:56:21.673077 ignition[782]: no config at "/usr/lib/ignition/user.ign" Nov 13 11:56:21.673312 ignition[782]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 13 11:56:21.673389 ignition[782]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Nov 13 11:56:21.674416 ignition[782]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 13 11:56:21.698870 ignition[782]: GET result: OK Nov 13 11:56:21.699129 ignition[782]: parsing config with SHA512: 985f1c70231d56ea12d33af582d3f8c97733a72812523ee564ab9d193114d02cbd4afb8e6fa113488be0c1278173c1ccd5b9a4980d67a25b2f50a310a5dbe939 Nov 13 11:56:21.706421 unknown[782]: fetched base config from "system" Nov 13 11:56:21.706436 unknown[782]: fetched base config from "system" Nov 13 11:56:21.707088 ignition[782]: fetch: fetch complete Nov 13 11:56:21.706446 unknown[782]: fetched user config from "openstack" Nov 13 11:56:21.707096 ignition[782]: fetch: fetch passed Nov 13 11:56:21.710562 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 13 11:56:21.707164 ignition[782]: Ignition finished successfully Nov 13 11:56:21.720386 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 13 11:56:21.753447 ignition[790]: Ignition 2.19.0 Nov 13 11:56:21.753460 ignition[790]: Stage: kargs Nov 13 11:56:21.753634 ignition[790]: no configs at "/usr/lib/ignition/base.d" Nov 13 11:56:21.753644 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 11:56:21.755793 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 13 11:56:21.754626 ignition[790]: kargs: kargs passed Nov 13 11:56:21.754673 ignition[790]: Ignition finished successfully Nov 13 11:56:21.772458 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 13 11:56:21.785357 ignition[797]: Ignition 2.19.0 Nov 13 11:56:21.785367 ignition[797]: Stage: disks Nov 13 11:56:21.785542 ignition[797]: no configs at "/usr/lib/ignition/base.d" Nov 13 11:56:21.785553 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 11:56:21.786435 ignition[797]: disks: disks passed Nov 13 11:56:21.786483 ignition[797]: Ignition finished successfully Nov 13 11:56:21.787984 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 13 11:56:21.788972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 13 11:56:21.789785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 13 11:56:21.790623 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 13 11:56:21.791509 systemd[1]: Reached target sysinit.target - System Initialization. Nov 13 11:56:21.792247 systemd[1]: Reached target basic.target - Basic System. Nov 13 11:56:21.797527 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 13 11:56:21.818146 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 13 11:56:21.821520 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 13 11:56:21.826809 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 13 11:56:21.925224 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 13 11:56:21.925762 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 13 11:56:21.926694 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 13 11:56:21.934286 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 11:56:21.936289 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 13 11:56:21.937459 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 13 11:56:21.939349 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Nov 13 11:56:21.940431 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 13 11:56:21.941232 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 13 11:56:21.944241 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Nov 13 11:56:21.948705 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 11:56:21.948749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 11:56:21.948763 kernel: BTRFS info (device vda6): using free space tree Nov 13 11:56:21.950233 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 13 11:56:21.954218 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 11:56:21.958114 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 13 11:56:21.959723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 13 11:56:22.016322 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Nov 13 11:56:22.026677 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Nov 13 11:56:22.033018 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Nov 13 11:56:22.038739 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Nov 13 11:56:22.159537 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 13 11:56:22.165438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 13 11:56:22.171567 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Nov 13 11:56:22.179206 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 11:56:22.207943 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 13 11:56:22.209048 ignition[932]: INFO : Ignition 2.19.0 Nov 13 11:56:22.209048 ignition[932]: INFO : Stage: mount Nov 13 11:56:22.210049 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 11:56:22.210049 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 11:56:22.211039 ignition[932]: INFO : mount: mount passed Nov 13 11:56:22.211039 ignition[932]: INFO : Ignition finished successfully Nov 13 11:56:22.212067 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 13 11:56:22.430051 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 13 11:56:23.499609 systemd-networkd[778]: eth0: Gained IPv6LL Nov 13 11:56:25.010415 systemd-networkd[778]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:180e:24:19ff:fef4:603a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:180e:24:19ff:fef4:603a/64 assigned by NDisc. Nov 13 11:56:25.010439 systemd-networkd[778]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 13 11:56:29.092699 coreos-metadata[816]: Nov 13 11:56:29.092 WARN failed to locate config-drive, using the metadata service API instead Nov 13 11:56:29.110579 coreos-metadata[816]: Nov 13 11:56:29.110 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 13 11:56:29.126536 coreos-metadata[816]: Nov 13 11:56:29.126 INFO Fetch successful Nov 13 11:56:29.127877 coreos-metadata[816]: Nov 13 11:56:29.127 INFO wrote hostname srv-gr2mf.gb1.brightbox.com to /sysroot/etc/hostname Nov 13 11:56:29.129764 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
Nov 13 11:56:29.129905 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Nov 13 11:56:29.151296 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 13 11:56:29.159109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 11:56:29.191223 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (949) Nov 13 11:56:29.194788 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 13 11:56:29.194870 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 11:56:29.194907 kernel: BTRFS info (device vda6): using free space tree Nov 13 11:56:29.198230 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 11:56:29.202897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 13 11:56:29.226395 ignition[966]: INFO : Ignition 2.19.0 Nov 13 11:56:29.227116 ignition[966]: INFO : Stage: files Nov 13 11:56:29.227744 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 11:56:29.228273 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 11:56:29.229788 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Nov 13 11:56:29.231478 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 13 11:56:29.232066 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 13 11:56:29.237894 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 13 11:56:29.238482 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 13 11:56:29.239055 unknown[966]: wrote ssh authorized keys file for user: core Nov 13 11:56:29.240669 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 13 11:56:29.249964 ignition[966]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 13 11:56:29.249964 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 13 11:56:29.460687 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 13 11:56:29.732856 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 13 11:56:29.732856 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 13 11:56:29.734587 ignition[966]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 13 11:56:29.734587 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 13 11:56:29.742725 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Nov 13 11:56:30.371670 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 13 11:56:31.713157 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Nov 13 11:56:31.713157 ignition[966]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 13 11:56:31.718452 ignition[966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 13 11:56:31.718452 ignition[966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 13 11:56:31.718452 ignition[966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 13 11:56:31.718452 ignition[966]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 13 11:56:31.718452 ignition[966]: INFO : files: op(d): [finished] setting preset to 
enabled for "prepare-helm.service" Nov 13 11:56:31.718452 ignition[966]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 13 11:56:31.718452 ignition[966]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 13 11:56:31.718452 ignition[966]: INFO : files: files passed Nov 13 11:56:31.718452 ignition[966]: INFO : Ignition finished successfully Nov 13 11:56:31.720332 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 13 11:56:31.730513 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 13 11:56:31.734377 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 13 11:56:31.737056 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 13 11:56:31.737178 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 13 11:56:31.757130 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 13 11:56:31.757130 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 13 11:56:31.760101 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 13 11:56:31.762139 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 13 11:56:31.762879 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 13 11:56:31.769516 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 13 11:56:31.811885 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 13 11:56:31.812074 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 13 11:56:31.813741 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 13 11:56:31.814756 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 13 11:56:31.816512 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 13 11:56:31.818465 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 13 11:56:31.840770 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 13 11:56:31.847467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 13 11:56:31.865349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 13 11:56:31.866445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 11:56:31.867595 systemd[1]: Stopped target timers.target - Timer Units. Nov 13 11:56:31.868529 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 13 11:56:31.868662 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 13 11:56:31.869624 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 13 11:56:31.870153 systemd[1]: Stopped target basic.target - Basic System. Nov 13 11:56:31.871030 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 13 11:56:31.871816 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 13 11:56:31.872602 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 13 11:56:31.873464 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 13 11:56:31.874321 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 13 11:56:31.875160 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 13 11:56:31.875957 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 13 11:56:31.876780 systemd[1]: Stopped target swap.target - Swaps. 
Nov 13 11:56:31.877516 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 13 11:56:31.877635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 13 11:56:31.878610 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 13 11:56:31.879538 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 11:56:31.880449 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 13 11:56:31.882835 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 11:56:31.883759 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 13 11:56:31.883886 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 13 11:56:31.885283 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 13 11:56:31.885403 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 13 11:56:31.885951 systemd[1]: ignition-files.service: Deactivated successfully. Nov 13 11:56:31.886055 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 13 11:56:31.894494 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 13 11:56:31.909503 ignition[1019]: INFO : Ignition 2.19.0 Nov 13 11:56:31.909503 ignition[1019]: INFO : Stage: umount Nov 13 11:56:31.913973 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 11:56:31.913973 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 13 11:56:31.913973 ignition[1019]: INFO : umount: umount passed Nov 13 11:56:31.913973 ignition[1019]: INFO : Ignition finished successfully Nov 13 11:56:31.913371 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 13 11:56:31.913823 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 13 11:56:31.914398 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 13 11:56:31.915708 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 13 11:56:31.917482 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 13 11:56:31.921881 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 13 11:56:31.921977 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 13 11:56:31.924604 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 13 11:56:31.926648 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 13 11:56:31.927164 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 13 11:56:31.930779 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 13 11:56:31.930826 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 13 11:56:31.932112 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 13 11:56:31.932155 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 13 11:56:31.933050 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 13 11:56:31.933089 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 13 11:56:31.933945 systemd[1]: Stopped target network.target - Network. Nov 13 11:56:31.934757 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 13 11:56:31.934802 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 13 11:56:31.935681 systemd[1]: Stopped target paths.target - Path Units. Nov 13 11:56:31.936467 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 13 11:56:31.940273 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 11:56:31.941292 systemd[1]: Stopped target slices.target - Slice Units. Nov 13 11:56:31.942063 systemd[1]: Stopped target sockets.target - Socket Units. Nov 13 11:56:31.942884 systemd[1]: iscsid.socket: Deactivated successfully. 
Nov 13 11:56:31.942924 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 13 11:56:31.943407 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 13 11:56:31.943444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 13 11:56:31.944650 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 13 11:56:31.944695 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 13 11:56:31.945525 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 13 11:56:31.945582 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 13 11:56:31.946597 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 13 11:56:31.947333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 13 11:56:31.953539 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 13 11:56:31.953689 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 13 11:56:31.954063 systemd-networkd[778]: eth0: DHCPv6 lease lost Nov 13 11:56:31.956725 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 13 11:56:31.956810 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 11:56:31.958315 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 13 11:56:31.958465 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 13 11:56:31.960092 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 13 11:56:31.960214 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 13 11:56:31.961709 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 13 11:56:31.961773 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 13 11:56:31.962297 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 13 11:56:31.962345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Nov 13 11:56:31.969296 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 13 11:56:31.969684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 13 11:56:31.969734 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 13 11:56:31.970178 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 13 11:56:31.970233 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 13 11:56:31.970638 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 13 11:56:31.970674 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 13 11:56:31.971336 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 13 11:56:31.980405 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 13 11:56:31.982021 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 13 11:56:31.984471 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 13 11:56:31.984630 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 11:56:31.987046 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 13 11:56:31.987119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 13 11:56:31.988628 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 13 11:56:31.988663 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 11:56:31.989974 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 13 11:56:31.990014 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 13 11:56:31.992028 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 13 11:56:31.992067 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 13 11:56:31.993259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 13 11:56:31.993300 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 11:56:32.005356 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 13 11:56:32.006455 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 13 11:56:32.006511 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 11:56:32.006960 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 13 11:56:32.006997 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 13 11:56:32.007433 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 13 11:56:32.007468 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 11:56:32.007876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 13 11:56:32.007917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 11:56:32.011589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 13 11:56:32.011685 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 13 11:56:32.012621 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 13 11:56:32.021346 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 13 11:56:32.028864 systemd[1]: Switching root. Nov 13 11:56:32.064742 systemd-journald[200]: Journal stopped Nov 13 11:56:33.070694 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). 
Nov 13 11:56:33.070820 kernel: SELinux: policy capability network_peer_controls=1 Nov 13 11:56:33.070840 kernel: SELinux: policy capability open_perms=1 Nov 13 11:56:33.070857 kernel: SELinux: policy capability extended_socket_class=1 Nov 13 11:56:33.070870 kernel: SELinux: policy capability always_check_network=0 Nov 13 11:56:33.070885 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 13 11:56:33.070898 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 13 11:56:33.070918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 13 11:56:33.070934 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 13 11:56:33.070950 kernel: audit: type=1403 audit(1731498992.186:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 13 11:56:33.070969 systemd[1]: Successfully loaded SELinux policy in 41.682ms. Nov 13 11:56:33.071008 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.965ms. Nov 13 11:56:33.071024 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 13 11:56:33.071042 systemd[1]: Detected virtualization kvm. Nov 13 11:56:33.071056 systemd[1]: Detected architecture x86-64. Nov 13 11:56:33.071069 systemd[1]: Detected first boot. Nov 13 11:56:33.071083 systemd[1]: Hostname set to . Nov 13 11:56:33.071099 systemd[1]: Initializing machine ID from VM UUID. Nov 13 11:56:33.071121 zram_generator::config[1062]: No configuration found. Nov 13 11:56:33.071143 systemd[1]: Populated /etc with preset unit settings. Nov 13 11:56:33.071166 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 13 11:56:33.071184 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Nov 13 11:56:33.073252 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 13 11:56:33.073271 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 13 11:56:33.073292 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 13 11:56:33.073311 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 13 11:56:33.073325 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 13 11:56:33.073339 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 13 11:56:33.073353 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 13 11:56:33.073367 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 13 11:56:33.073385 systemd[1]: Created slice user.slice - User and Session Slice. Nov 13 11:56:33.073401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 11:56:33.073415 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 11:56:33.073429 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 13 11:56:33.073445 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 13 11:56:33.073459 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 13 11:56:33.073472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 13 11:56:33.073490 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 13 11:56:33.073510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 11:56:33.073527 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Nov 13 11:56:33.073547 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 13 11:56:33.073561 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 13 11:56:33.073575 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 13 11:56:33.073589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 11:56:33.073606 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 13 11:56:33.073626 systemd[1]: Reached target slices.target - Slice Units. Nov 13 11:56:33.073642 systemd[1]: Reached target swap.target - Swaps. Nov 13 11:56:33.073655 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 13 11:56:33.073672 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 13 11:56:33.073685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 13 11:56:33.073699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 13 11:56:33.073712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 11:56:33.073725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 13 11:56:33.073739 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 13 11:56:33.073753 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 13 11:56:33.073769 systemd[1]: Mounting media.mount - External Media Directory... Nov 13 11:56:33.073783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 13 11:56:33.073805 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 13 11:56:33.073819 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 13 11:56:33.073832 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 13 11:56:33.073845 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 13 11:56:33.073858 systemd[1]: Reached target machines.target - Containers.
Nov 13 11:56:33.073873 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 13 11:56:33.073895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 11:56:33.073912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 13 11:56:33.073926 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 13 11:56:33.073940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 11:56:33.073954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 11:56:33.073968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 11:56:33.073986 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 13 11:56:33.074000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 11:56:33.074014 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 13 11:56:33.074031 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 13 11:56:33.074045 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 13 11:56:33.074058 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 13 11:56:33.074071 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 13 11:56:33.074085 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 13 11:56:33.074098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 13 11:56:33.074111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 13 11:56:33.074129 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 13 11:56:33.074143 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 13 11:56:33.074169 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 13 11:56:33.074182 systemd[1]: Stopped verity-setup.service.
Nov 13 11:56:33.074203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 11:56:33.074217 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 13 11:56:33.074230 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 13 11:56:33.074244 systemd[1]: Mounted media.mount - External Media Directory.
Nov 13 11:56:33.074259 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 13 11:56:33.074279 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 13 11:56:33.074294 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 13 11:56:33.074307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 11:56:33.074327 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 13 11:56:33.074342 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 13 11:56:33.074362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 11:56:33.074376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 11:56:33.074393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 11:56:33.074410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 11:56:33.074427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 13 11:56:33.074441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 13 11:56:33.074461 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 13 11:56:33.074475 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 13 11:56:33.074514 systemd-journald[1155]: Collecting audit messages is disabled.
Nov 13 11:56:33.076658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 13 11:56:33.076684 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 13 11:56:33.076699 kernel: loop: module loaded
Nov 13 11:56:33.076716 systemd-journald[1155]: Journal started
Nov 13 11:56:33.076753 systemd-journald[1155]: Runtime Journal (/run/log/journal/480bec079c6b4681bd5f8df586d4f68b) is 4.7M, max 38.0M, 33.2M free.
Nov 13 11:56:32.785248 systemd[1]: Queued start job for default target multi-user.target.
Nov 13 11:56:32.808070 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 13 11:56:33.080569 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 13 11:56:32.808628 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 13 11:56:33.079380 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 13 11:56:33.080837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 11:56:33.080975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 11:56:33.081818 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 13 11:56:33.084346 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 13 11:56:33.094207 kernel: fuse: init (API version 7.39)
Nov 13 11:56:33.099749 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 13 11:56:33.099894 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 13 11:56:33.115637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 13 11:56:33.117261 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 13 11:56:33.117302 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 13 11:56:33.118730 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 13 11:56:33.128323 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 13 11:56:33.134357 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 13 11:56:33.135406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 11:56:33.141350 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 13 11:56:33.143126 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 13 11:56:33.143653 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 11:56:33.147135 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 13 11:56:33.147617 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 11:56:33.148627 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 13 11:56:33.154777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 11:56:33.155409 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 13 11:56:33.156038 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 13 11:56:33.182808 kernel: ACPI: bus type drm_connector registered
Nov 13 11:56:33.181673 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 11:56:33.181826 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 11:56:33.218332 systemd-journald[1155]: Time spent on flushing to /var/log/journal/480bec079c6b4681bd5f8df586d4f68b is 96.125ms for 1154 entries.
Nov 13 11:56:33.218332 systemd-journald[1155]: System Journal (/var/log/journal/480bec079c6b4681bd5f8df586d4f68b) is 8.0M, max 584.8M, 576.8M free.
Nov 13 11:56:33.340606 systemd-journald[1155]: Received client request to flush runtime journal.
Nov 13 11:56:33.340658 kernel: loop0: detected capacity change from 0 to 8
Nov 13 11:56:33.340676 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 13 11:56:33.340695 kernel: loop1: detected capacity change from 0 to 210664
Nov 13 11:56:33.241519 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 13 11:56:33.242279 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 13 11:56:33.249870 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Nov 13 11:56:33.249886 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Nov 13 11:56:33.253608 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 13 11:56:33.277088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 11:56:33.284438 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 13 11:56:33.285182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 11:56:33.289403 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 13 11:56:33.308277 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 13 11:56:33.312646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 13 11:56:33.313776 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 13 11:56:33.346248 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 13 11:56:33.351912 kernel: loop2: detected capacity change from 0 to 142488
Nov 13 11:56:33.367601 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 13 11:56:33.384354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 13 11:56:33.408643 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Nov 13 11:56:33.408937 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Nov 13 11:56:33.413705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 11:56:33.427217 kernel: loop3: detected capacity change from 0 to 140768
Nov 13 11:56:33.469215 kernel: loop4: detected capacity change from 0 to 8
Nov 13 11:56:33.475213 kernel: loop5: detected capacity change from 0 to 210664
Nov 13 11:56:33.503214 kernel: loop6: detected capacity change from 0 to 142488
Nov 13 11:56:33.526215 kernel: loop7: detected capacity change from 0 to 140768
Nov 13 11:56:33.546277 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Nov 13 11:56:33.546810 (sd-merge)[1225]: Merged extensions into '/usr'.
Nov 13 11:56:33.558532 systemd[1]: Reloading requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 13 11:56:33.558648 systemd[1]: Reloading...
Nov 13 11:56:33.705220 ldconfig[1195]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 13 11:56:33.716686 zram_generator::config[1251]: No configuration found.
Nov 13 11:56:33.881487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 11:56:33.930639 systemd[1]: Reloading finished in 371 ms.
Nov 13 11:56:33.961217 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 13 11:56:33.965013 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 13 11:56:33.976453 systemd[1]: Starting ensure-sysext.service...
Nov 13 11:56:33.982231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 13 11:56:34.003256 systemd[1]: Reloading requested from client PID 1307 ('systemctl') (unit ensure-sysext.service)...
Nov 13 11:56:34.003280 systemd[1]: Reloading...
Nov 13 11:56:34.036570 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 13 11:56:34.036964 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 13 11:56:34.037913 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 13 11:56:34.040533 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Nov 13 11:56:34.040630 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Nov 13 11:56:34.044007 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 11:56:34.044019 systemd-tmpfiles[1308]: Skipping /boot
Nov 13 11:56:34.060356 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 11:56:34.060368 systemd-tmpfiles[1308]: Skipping /boot
Nov 13 11:56:34.112250 zram_generator::config[1343]: No configuration found.
Nov 13 11:56:34.243399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 11:56:34.292796 systemd[1]: Reloading finished in 289 ms.
Nov 13 11:56:34.309939 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 13 11:56:34.314682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 11:56:34.328403 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 13 11:56:34.334028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 13 11:56:34.337234 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 13 11:56:34.341605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 13 11:56:34.343280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 11:56:34.345617 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 13 11:56:34.354784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 11:56:34.354974 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 11:56:34.357442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 11:56:34.359429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 11:56:34.362492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 11:56:34.363000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 11:56:34.363129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 11:56:34.370785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 11:56:34.370976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 11:56:34.371132 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 11:56:34.379608 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 13 11:56:34.380554 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 11:56:34.383536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 11:56:34.383758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 11:56:34.391470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 11:56:34.393355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 11:56:34.393506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 11:56:34.396233 systemd[1]: Finished ensure-sysext.service.
Nov 13 11:56:34.409096 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 13 11:56:34.415736 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 13 11:56:34.417681 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 13 11:56:34.418416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 11:56:34.418544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 11:56:34.420849 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 11:56:34.420987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 11:56:34.426937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 11:56:34.427016 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 13 11:56:34.432520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 11:56:34.432685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 11:56:34.447309 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 11:56:34.447466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 11:56:34.448131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 11:56:34.452478 systemd-udevd[1398]: Using default interface naming scheme 'v255'.
Nov 13 11:56:34.461468 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 13 11:56:34.465427 augenrules[1426]: No rules
Nov 13 11:56:34.467351 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 13 11:56:34.469254 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 13 11:56:34.483038 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 13 11:56:34.486838 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 13 11:56:34.492345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 11:56:34.502407 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 13 11:56:34.580865 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 13 11:56:34.581886 systemd[1]: Reached target time-set.target - System Time Set.
Nov 13 11:56:34.616114 systemd-resolved[1397]: Positive Trust Anchors:
Nov 13 11:56:34.616137 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 13 11:56:34.616177 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 13 11:56:34.622013 systemd-resolved[1397]: Using system hostname 'srv-gr2mf.gb1.brightbox.com'.
Nov 13 11:56:34.623544 systemd-networkd[1448]: lo: Link UP
Nov 13 11:56:34.624127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 13 11:56:34.624596 systemd-networkd[1448]: lo: Gained carrier
Nov 13 11:56:34.625288 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 13 11:56:34.626358 systemd-networkd[1448]: Enumeration completed
Nov 13 11:56:34.626430 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 13 11:56:34.626892 systemd[1]: Reached target network.target - Network.
Nov 13 11:56:34.634216 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1442)
Nov 13 11:56:34.635350 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 13 11:56:34.638214 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1442)
Nov 13 11:56:34.652119 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 13 11:56:34.660801 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 13 11:56:34.660812 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 13 11:56:34.663507 systemd-networkd[1448]: eth0: Link UP
Nov 13 11:56:34.663517 systemd-networkd[1448]: eth0: Gained carrier
Nov 13 11:56:34.663537 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 13 11:56:34.685517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1447)
Nov 13 11:56:34.687952 systemd-networkd[1448]: eth0: DHCPv4 address 10.244.96.58/30, gateway 10.244.96.57 acquired from 10.244.96.57
Nov 13 11:56:34.688742 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Nov 13 11:56:34.728295 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 13 11:56:34.733215 kernel: ACPI: button: Power Button [PWRF]
Nov 13 11:56:34.746010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 13 11:56:34.749209 kernel: mousedev: PS/2 mouse device common for all mice
Nov 13 11:56:34.754746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 13 11:56:34.774215 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 13 11:56:34.785536 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 13 11:56:34.798216 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 13 11:56:34.805027 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 13 11:56:34.805209 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 13 11:56:34.820430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 11:56:34.960916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 11:56:34.993629 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 13 11:56:34.999463 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 13 11:56:35.022470 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 11:56:35.051093 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 13 11:56:35.053859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 13 11:56:35.061677 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 13 11:56:35.062339 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 13 11:56:35.062907 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 13 11:56:35.063660 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 13 11:56:35.064303 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 13 11:56:35.064829 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 13 11:56:35.065358 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 13 11:56:35.065399 systemd[1]: Reached target paths.target - Path Units.
Nov 13 11:56:35.065824 systemd[1]: Reached target timers.target - Timer Units.
Nov 13 11:56:35.067089 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 13 11:56:35.069576 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 13 11:56:35.078360 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 13 11:56:35.081516 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 13 11:56:35.083532 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 13 11:56:35.084602 systemd[1]: Reached target sockets.target - Socket Units.
Nov 13 11:56:35.085508 systemd[1]: Reached target basic.target - Basic System.
Nov 13 11:56:35.086381 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 13 11:56:35.086551 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 13 11:56:35.088372 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 13 11:56:35.101385 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 13 11:56:35.106212 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 11:56:35.107290 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 13 11:56:35.115780 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 13 11:56:35.120256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 13 11:56:35.120772 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 13 11:56:35.130458 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 13 11:56:35.147166 jq[1489]: false
Nov 13 11:56:35.137558 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 13 11:56:35.139340 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 13 11:56:35.143415 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 13 11:56:35.150360 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 13 11:56:35.151366 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 13 11:56:35.151853 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 13 11:56:35.152805 dbus-daemon[1486]: [system] SELinux support is enabled
Nov 13 11:56:35.153410 systemd[1]: Starting update-engine.service - Update Engine...
Nov 13 11:56:35.156713 dbus-daemon[1486]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1448 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 13 11:56:35.162290 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 13 11:56:35.163314 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 13 11:56:35.168585 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 13 11:56:35.171178 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 13 11:56:35.172130 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 13 11:56:35.177607 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 13 11:56:35.177787 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 13 11:56:35.186717 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 13 11:56:35.186777 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 13 11:56:35.189232 dbus-daemon[1486]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 13 11:56:35.190112 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 13 11:56:35.190136 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 13 11:56:35.208706 jq[1498]: true
Nov 13 11:56:35.204440 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 13 11:56:35.221563 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 13 11:56:35.222434 systemd[1]: motdgen.service: Deactivated successfully.
Nov 13 11:56:35.222619 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 13 11:56:35.242210 tar[1503]: linux-amd64/helm
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found loop4
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found loop5
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found loop6
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found loop7
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda1
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda2
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda3
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found usr
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda4
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda6
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda7
Nov 13 11:56:35.268287 extend-filesystems[1490]: Found vda9
Nov 13 11:56:35.268287 extend-filesystems[1490]: Checking size of /dev/vda9
Nov 13 11:56:35.274339 systemd[1]: Started update-engine.service - Update Engine.
Nov 13 11:56:35.315360 update_engine[1497]: I20241113 11:56:35.263170 1497 main.cc:92] Flatcar Update Engine starting
Nov 13 11:56:35.315360 update_engine[1497]: I20241113 11:56:35.284077 1497 update_check_scheduler.cc:74] Next update check in 2m7s
Nov 13 11:56:35.315530 jq[1516]: true
Nov 13 11:56:35.285772 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 13 11:56:35.336244 extend-filesystems[1490]: Resized partition /dev/vda9
Nov 13 11:56:35.348331 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024)
Nov 13 11:56:35.360237 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Nov 13 11:56:35.394357 systemd-logind[1496]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 13 11:56:35.394770 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 13 11:56:35.395319 systemd-logind[1496]: New seat seat0.
Nov 13 11:56:35.396633 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 13 11:56:35.414380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1449) Nov 13 11:56:35.434911 dbus-daemon[1486]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 13 11:56:35.435886 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 13 11:56:35.437847 dbus-daemon[1486]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1514 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 13 11:56:35.450459 systemd[1]: Starting polkit.service - Authorization Manager... Nov 13 11:56:35.464259 bash[1543]: Updated "/home/core/.ssh/authorized_keys" Nov 13 11:56:35.462297 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 13 11:56:35.476314 systemd[1]: Starting sshkeys.service... Nov 13 11:56:35.512606 polkitd[1546]: Started polkitd version 121 Nov 13 11:56:35.518079 polkitd[1546]: Loading rules from directory /etc/polkit-1/rules.d Nov 13 11:56:35.519096 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 13 11:56:35.518136 polkitd[1546]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 13 11:56:35.522804 systemd[1]: Started polkit.service - Authorization Manager. Nov 13 11:56:35.519757 polkitd[1546]: Finished loading, compiling and executing 2 rules Nov 13 11:56:35.520452 dbus-daemon[1486]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 13 11:56:35.520754 polkitd[1546]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 13 11:56:35.533343 systemd-hostnamed[1514]: Hostname set to (static) Nov 13 11:56:35.534444 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 13 11:56:35.542913 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Nov 13 11:56:35.550244 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 13 11:56:35.602414 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 13 11:56:35.617743 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 13 11:56:35.617743 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 13 11:56:35.617743 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 13 11:56:35.624904 extend-filesystems[1490]: Resized filesystem in /dev/vda9
Nov 13 11:56:35.618708 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 13 11:56:35.619499 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 13 11:56:35.657579 containerd[1515]: time="2024-11-13T11:56:35.657492126Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 13 11:56:35.725206 containerd[1515]: time="2024-11-13T11:56:35.722943095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 13 11:56:35.728943 containerd[1515]: time="2024-11-13T11:56:35.728903819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 13 11:56:35.729056 containerd[1515]: time="2024-11-13T11:56:35.729043508Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 13 11:56:35.729109 containerd[1515]: time="2024-11-13T11:56:35.729099327Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 13 11:56:35.729333 containerd[1515]: time="2024-11-13T11:56:35.729316832Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729409626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729481896Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729496052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729658381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729673957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729687704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729698843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729772207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.729965573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.730075788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 11:56:35.730267 containerd[1515]: time="2024-11-13T11:56:35.730089987Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 13 11:56:35.730545 containerd[1515]: time="2024-11-13T11:56:35.730158701Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 13 11:56:35.730545 containerd[1515]: time="2024-11-13T11:56:35.730238361Z" level=info msg="metadata content store policy set" policy=shared
Nov 13 11:56:35.731988 containerd[1515]: time="2024-11-13T11:56:35.731947275Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 13 11:56:35.732102 containerd[1515]: time="2024-11-13T11:56:35.732088623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 13 11:56:35.732187 containerd[1515]: time="2024-11-13T11:56:35.732175714Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 13 11:56:35.732268 containerd[1515]: time="2024-11-13T11:56:35.732257327Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 13 11:56:35.732327 containerd[1515]: time="2024-11-13T11:56:35.732307642Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 13 11:56:35.732518 containerd[1515]: time="2024-11-13T11:56:35.732499726Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 13 11:56:35.733009 containerd[1515]: time="2024-11-13T11:56:35.732991396Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 13 11:56:35.733292 containerd[1515]: time="2024-11-13T11:56:35.733187393Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 13 11:56:35.733368 containerd[1515]: time="2024-11-13T11:56:35.733357306Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 13 11:56:35.733424 containerd[1515]: time="2024-11-13T11:56:35.733408656Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 13 11:56:35.733476 containerd[1515]: time="2024-11-13T11:56:35.733466174Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733529 containerd[1515]: time="2024-11-13T11:56:35.733515042Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733594 containerd[1515]: time="2024-11-13T11:56:35.733582966Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733653 containerd[1515]: time="2024-11-13T11:56:35.733633686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733704 containerd[1515]: time="2024-11-13T11:56:35.733694041Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733762 containerd[1515]: time="2024-11-13T11:56:35.733750865Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733813 containerd[1515]: time="2024-11-13T11:56:35.733798523Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733862 containerd[1515]: time="2024-11-13T11:56:35.733852991Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 13 11:56:35.733940 containerd[1515]: time="2024-11-13T11:56:35.733929729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734006 containerd[1515]: time="2024-11-13T11:56:35.733996451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734059 containerd[1515]: time="2024-11-13T11:56:35.734049435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734124 containerd[1515]: time="2024-11-13T11:56:35.734104295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734177 containerd[1515]: time="2024-11-13T11:56:35.734167273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734244 containerd[1515]: time="2024-11-13T11:56:35.734234099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734295 containerd[1515]: time="2024-11-13T11:56:35.734281116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734346 containerd[1515]: time="2024-11-13T11:56:35.734335345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734399 containerd[1515]: time="2024-11-13T11:56:35.734388119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734453 containerd[1515]: time="2024-11-13T11:56:35.734438481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734503 containerd[1515]: time="2024-11-13T11:56:35.734493627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734610 containerd[1515]: time="2024-11-13T11:56:35.734598893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734672 containerd[1515]: time="2024-11-13T11:56:35.734652619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734728 containerd[1515]: time="2024-11-13T11:56:35.734718374Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 13 11:56:35.734832 containerd[1515]: time="2024-11-13T11:56:35.734821302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734886 containerd[1515]: time="2024-11-13T11:56:35.734876716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.734933 containerd[1515]: time="2024-11-13T11:56:35.734923761Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 13 11:56:35.735186 containerd[1515]: time="2024-11-13T11:56:35.735170994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 13 11:56:35.735307 containerd[1515]: time="2024-11-13T11:56:35.735292150Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 13 11:56:35.735365 containerd[1515]: time="2024-11-13T11:56:35.735355553Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 13 11:56:35.735415 containerd[1515]: time="2024-11-13T11:56:35.735403998Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 13 11:56:35.735518 containerd[1515]: time="2024-11-13T11:56:35.735507329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.735567 containerd[1515]: time="2024-11-13T11:56:35.735558289Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 13 11:56:35.735617 containerd[1515]: time="2024-11-13T11:56:35.735608506Z" level=info msg="NRI interface is disabled by configuration."
Nov 13 11:56:35.735708 containerd[1515]: time="2024-11-13T11:56:35.735696722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 13 11:56:35.736457 containerd[1515]: time="2024-11-13T11:56:35.736220057Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 13 11:56:35.737178 containerd[1515]: time="2024-11-13T11:56:35.736965748Z" level=info msg="Connect containerd service"
Nov 13 11:56:35.737178 containerd[1515]: time="2024-11-13T11:56:35.737044300Z" level=info msg="using legacy CRI server"
Nov 13 11:56:35.737178 containerd[1515]: time="2024-11-13T11:56:35.737054817Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 13 11:56:35.737375 containerd[1515]: time="2024-11-13T11:56:35.737328004Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 13 11:56:35.739435 containerd[1515]: time="2024-11-13T11:56:35.739390849Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 13 11:56:35.740028 containerd[1515]: time="2024-11-13T11:56:35.739988077Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 13 11:56:35.740203 containerd[1515]: time="2024-11-13T11:56:35.740109773Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 13 11:56:35.740592 containerd[1515]: time="2024-11-13T11:56:35.740560554Z" level=info msg="Start subscribing containerd event"
Nov 13 11:56:35.740806 containerd[1515]: time="2024-11-13T11:56:35.740709487Z" level=info msg="Start recovering state"
Nov 13 11:56:35.740927 containerd[1515]: time="2024-11-13T11:56:35.740914369Z" level=info msg="Start event monitor"
Nov 13 11:56:35.741188 containerd[1515]: time="2024-11-13T11:56:35.741093535Z" level=info msg="Start snapshots syncer"
Nov 13 11:56:35.741188 containerd[1515]: time="2024-11-13T11:56:35.741128886Z" level=info msg="Start cni network conf syncer for default"
Nov 13 11:56:35.741188 containerd[1515]: time="2024-11-13T11:56:35.741141228Z" level=info msg="Start streaming server"
Nov 13 11:56:35.741470 systemd[1]: Started containerd.service - containerd container runtime.
Nov 13 11:56:35.742651 containerd[1515]: time="2024-11-13T11:56:35.742632892Z" level=info msg="containerd successfully booted in 0.086367s"
Nov 13 11:56:35.839399 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 13 11:56:35.869750 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 13 11:56:35.878640 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 13 11:56:35.883533 systemd[1]: Started sshd@0-10.244.96.58:22-147.75.109.163:45478.service - OpenSSH per-connection server daemon (147.75.109.163:45478).
Nov 13 11:56:35.895442 systemd[1]: issuegen.service: Deactivated successfully.
Nov 13 11:56:35.895650 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 13 11:56:35.909595 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 13 11:56:35.933765 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 13 11:56:35.942642 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 13 11:56:35.946936 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 13 11:56:35.949937 systemd[1]: Reached target getty.target - Login Prompts. Nov 13 11:56:36.060814 tar[1503]: linux-amd64/LICENSE Nov 13 11:56:36.061711 tar[1503]: linux-amd64/README.md Nov 13 11:56:36.073894 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 13 11:56:36.363691 systemd-networkd[1448]: eth0: Gained IPv6LL Nov 13 11:56:36.365682 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Nov 13 11:56:36.369888 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 13 11:56:36.372862 systemd[1]: Reached target network-online.target - Network is Online. Nov 13 11:56:36.380658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 11:56:36.389062 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 13 11:56:36.413379 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 13 11:56:36.794715 sshd[1584]: Accepted publickey for core from 147.75.109.163 port 45478 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:56:36.799572 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:56:36.814204 systemd-logind[1496]: New session 1 of user core. Nov 13 11:56:36.816009 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 13 11:56:36.823532 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 13 11:56:36.841869 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 13 11:56:36.849364 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 13 11:56:36.865325 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 13 11:56:36.967630 systemd[1611]: Queued start job for default target default.target. Nov 13 11:56:36.984074 systemd[1611]: Created slice app.slice - User Application Slice. 
Nov 13 11:56:36.984318 systemd[1611]: Reached target paths.target - Paths. Nov 13 11:56:36.984361 systemd[1611]: Reached target timers.target - Timers. Nov 13 11:56:36.988477 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 13 11:56:37.003387 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 13 11:56:37.004057 systemd[1611]: Reached target sockets.target - Sockets. Nov 13 11:56:37.004079 systemd[1611]: Reached target basic.target - Basic System. Nov 13 11:56:37.004121 systemd[1611]: Reached target default.target - Main User Target. Nov 13 11:56:37.004151 systemd[1611]: Startup finished in 128ms. Nov 13 11:56:37.004694 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 13 11:56:37.011391 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 13 11:56:37.208596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 11:56:37.218166 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 11:56:37.651593 systemd[1]: Started sshd@1-10.244.96.58:22-147.75.109.163:45482.service - OpenSSH per-connection server daemon (147.75.109.163:45482). Nov 13 11:56:37.856434 kubelet[1625]: E1113 11:56:37.856308 1625 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 11:56:37.861010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 11:56:37.861239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 11:56:37.861710 systemd[1]: kubelet.service: Consumed 1.115s CPU time. 
Nov 13 11:56:37.871977 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Nov 13 11:56:37.873972 systemd-networkd[1448]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:180e:24:19ff:fef4:603a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:180e:24:19ff:fef4:603a/64 assigned by NDisc. Nov 13 11:56:37.873993 systemd-networkd[1448]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 13 11:56:38.540724 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 45482 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:56:38.542745 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:56:38.548423 systemd-logind[1496]: New session 2 of user core. Nov 13 11:56:38.555437 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 13 11:56:38.924681 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Nov 13 11:56:39.164551 sshd[1633]: pam_unix(sshd:session): session closed for user core Nov 13 11:56:39.171835 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Nov 13 11:56:39.172572 systemd[1]: sshd@1-10.244.96.58:22-147.75.109.163:45482.service: Deactivated successfully. Nov 13 11:56:39.176835 systemd[1]: session-2.scope: Deactivated successfully. Nov 13 11:56:39.180092 systemd-logind[1496]: Removed session 2. Nov 13 11:56:39.336687 systemd[1]: Started sshd@2-10.244.96.58:22-147.75.109.163:36524.service - OpenSSH per-connection server daemon (147.75.109.163:36524). Nov 13 11:56:40.224410 sshd[1646]: Accepted publickey for core from 147.75.109.163 port 36524 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:56:40.227002 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:56:40.237496 systemd-logind[1496]: New session 3 of user core. 
Nov 13 11:56:40.246558 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 13 11:56:40.846820 sshd[1646]: pam_unix(sshd:session): session closed for user core Nov 13 11:56:40.855477 systemd[1]: sshd@2-10.244.96.58:22-147.75.109.163:36524.service: Deactivated successfully. Nov 13 11:56:40.858379 systemd[1]: session-3.scope: Deactivated successfully. Nov 13 11:56:40.859389 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. Nov 13 11:56:40.860998 systemd-logind[1496]: Removed session 3. Nov 13 11:56:41.004835 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 13 11:56:41.009013 login[1591]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 13 11:56:41.010374 systemd-logind[1496]: New session 4 of user core. Nov 13 11:56:41.018586 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 13 11:56:41.021071 systemd-logind[1496]: New session 5 of user core. Nov 13 11:56:41.022640 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 13 11:56:42.201242 coreos-metadata[1485]: Nov 13 11:56:42.200 WARN failed to locate config-drive, using the metadata service API instead Nov 13 11:56:42.220184 coreos-metadata[1485]: Nov 13 11:56:42.220 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Nov 13 11:56:42.225763 coreos-metadata[1485]: Nov 13 11:56:42.225 INFO Fetch failed with 404: resource not found Nov 13 11:56:42.225763 coreos-metadata[1485]: Nov 13 11:56:42.225 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 13 11:56:42.226370 coreos-metadata[1485]: Nov 13 11:56:42.226 INFO Fetch successful Nov 13 11:56:42.226479 coreos-metadata[1485]: Nov 13 11:56:42.226 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Nov 13 11:56:42.236203 coreos-metadata[1485]: Nov 13 11:56:42.236 INFO Fetch successful Nov 13 11:56:42.236359 coreos-metadata[1485]: Nov 13 11:56:42.236 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Nov 13 11:56:42.250927 coreos-metadata[1485]: Nov 13 11:56:42.250 INFO Fetch successful Nov 13 11:56:42.251377 coreos-metadata[1485]: Nov 13 11:56:42.251 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Nov 13 11:56:42.267818 coreos-metadata[1485]: Nov 13 11:56:42.267 INFO Fetch successful Nov 13 11:56:42.268151 coreos-metadata[1485]: Nov 13 11:56:42.268 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Nov 13 11:56:42.286303 coreos-metadata[1485]: Nov 13 11:56:42.286 INFO Fetch successful Nov 13 11:56:42.333713 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 13 11:56:42.336846 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 13 11:56:42.656721 coreos-metadata[1566]: Nov 13 11:56:42.655 WARN failed to locate config-drive, using the metadata service API instead Nov 13 11:56:42.672569 coreos-metadata[1566]: Nov 13 11:56:42.672 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 13 11:56:42.698427 coreos-metadata[1566]: Nov 13 11:56:42.698 INFO Fetch successful Nov 13 11:56:42.698841 coreos-metadata[1566]: Nov 13 11:56:42.698 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 13 11:56:42.734139 coreos-metadata[1566]: Nov 13 11:56:42.733 INFO Fetch successful Nov 13 11:56:42.736569 unknown[1566]: wrote ssh authorized keys file for user: core Nov 13 11:56:42.771555 update-ssh-keys[1688]: Updated "/home/core/.ssh/authorized_keys" Nov 13 11:56:42.772972 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 13 11:56:42.776595 systemd[1]: Finished sshkeys.service. Nov 13 11:56:42.781878 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 13 11:56:42.782333 systemd[1]: Startup finished in 1.026s (kernel) + 13.471s (initrd) + 10.637s (userspace) = 25.135s. Nov 13 11:56:48.112490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 13 11:56:48.121587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 11:56:48.281722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 13 11:56:48.286639 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 11:56:48.337548 kubelet[1699]: E1113 11:56:48.337464 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 11:56:48.347987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 11:56:48.348693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 11:56:51.013559 systemd[1]: Started sshd@3-10.244.96.58:22-147.75.109.163:55558.service - OpenSSH per-connection server daemon (147.75.109.163:55558). Nov 13 11:56:51.924063 sshd[1709]: Accepted publickey for core from 147.75.109.163 port 55558 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:56:51.928060 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:56:51.935736 systemd-logind[1496]: New session 6 of user core. Nov 13 11:56:51.948417 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 13 11:56:52.549426 sshd[1709]: pam_unix(sshd:session): session closed for user core Nov 13 11:56:52.554863 systemd[1]: sshd@3-10.244.96.58:22-147.75.109.163:55558.service: Deactivated successfully. Nov 13 11:56:52.556859 systemd[1]: session-6.scope: Deactivated successfully. Nov 13 11:56:52.558625 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Nov 13 11:56:52.559844 systemd-logind[1496]: Removed session 6. Nov 13 11:56:52.717883 systemd[1]: Started sshd@4-10.244.96.58:22-147.75.109.163:55562.service - OpenSSH per-connection server daemon (147.75.109.163:55562). 
Nov 13 11:56:53.611573 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 55562 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:56:53.615114 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:56:53.625527 systemd-logind[1496]: New session 7 of user core.
Nov 13 11:56:53.632660 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 13 11:56:54.227553 sshd[1716]: pam_unix(sshd:session): session closed for user core
Nov 13 11:56:54.236357 systemd[1]: sshd@4-10.244.96.58:22-147.75.109.163:55562.service: Deactivated successfully.
Nov 13 11:56:54.239022 systemd[1]: session-7.scope: Deactivated successfully.
Nov 13 11:56:54.240095 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit.
Nov 13 11:56:54.242064 systemd-logind[1496]: Removed session 7.
Nov 13 11:56:54.393665 systemd[1]: Started sshd@5-10.244.96.58:22-147.75.109.163:55566.service - OpenSSH per-connection server daemon (147.75.109.163:55566).
Nov 13 11:56:55.293082 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 55566 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:56:55.296680 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:56:55.308459 systemd-logind[1496]: New session 8 of user core.
Nov 13 11:56:55.318568 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 13 11:56:55.918999 sshd[1723]: pam_unix(sshd:session): session closed for user core
Nov 13 11:56:55.928083 systemd[1]: sshd@5-10.244.96.58:22-147.75.109.163:55566.service: Deactivated successfully.
Nov 13 11:56:55.931838 systemd[1]: session-8.scope: Deactivated successfully.
Nov 13 11:56:55.933137 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit.
Nov 13 11:56:55.935325 systemd-logind[1496]: Removed session 8.
Nov 13 11:56:56.077896 systemd[1]: Started sshd@6-10.244.96.58:22-147.75.109.163:55574.service - OpenSSH per-connection server daemon (147.75.109.163:55574).
Nov 13 11:56:56.972115 sshd[1730]: Accepted publickey for core from 147.75.109.163 port 55574 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:56:56.975665 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:56:56.985256 systemd-logind[1496]: New session 9 of user core.
Nov 13 11:56:56.992419 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 13 11:56:57.467660 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 13 11:56:57.468182 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 11:56:57.481904 sudo[1733]: pam_unix(sudo:session): session closed for user root
Nov 13 11:56:57.628169 sshd[1730]: pam_unix(sshd:session): session closed for user core
Nov 13 11:56:57.638284 systemd[1]: sshd@6-10.244.96.58:22-147.75.109.163:55574.service: Deactivated successfully.
Nov 13 11:56:57.643029 systemd[1]: session-9.scope: Deactivated successfully.
Nov 13 11:56:57.644970 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit.
Nov 13 11:56:57.647060 systemd-logind[1496]: Removed session 9.
Nov 13 11:56:57.787662 systemd[1]: Started sshd@7-10.244.96.58:22-147.75.109.163:55576.service - OpenSSH per-connection server daemon (147.75.109.163:55576).
Nov 13 11:56:58.071621 systemd[1]: Started sshd@8-10.244.96.58:22-92.255.85.189:39970.service - OpenSSH per-connection server daemon (92.255.85.189:39970).
Nov 13 11:56:58.538629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 13 11:56:58.553535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 11:56:58.692874 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 55576 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:56:58.694447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 11:56:58.696502 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:56:58.702004 systemd-logind[1496]: New session 10 of user core.
Nov 13 11:56:58.708580 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 13 11:56:58.710132 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 13 11:56:58.771684 kubelet[1750]: E1113 11:56:58.771628 1750 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 13 11:56:58.776671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 11:56:58.776855 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 13 11:56:59.177878 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 13 11:56:59.178469 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 11:56:59.187071 sudo[1761]: pam_unix(sudo:session): session closed for user root
Nov 13 11:56:59.199014 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 13 11:56:59.199372 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 11:56:59.224904 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 13 11:56:59.227089 auditctl[1764]: No rules
Nov 13 11:56:59.229421 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 13 11:56:59.229926 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 13 11:56:59.237599 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 13 11:56:59.264322 augenrules[1782]: No rules
Nov 13 11:56:59.265282 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 13 11:56:59.266508 sudo[1760]: pam_unix(sudo:session): session closed for user root
Nov 13 11:56:59.372273 sshd[1741]: Invalid user ubnt from 92.255.85.189 port 39970
Nov 13 11:56:59.410977 sshd[1738]: pam_unix(sshd:session): session closed for user core
Nov 13 11:56:59.418417 systemd[1]: sshd@7-10.244.96.58:22-147.75.109.163:55576.service: Deactivated successfully.
Nov 13 11:56:59.420947 systemd[1]: session-10.scope: Deactivated successfully.
Nov 13 11:56:59.423183 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit.
Nov 13 11:56:59.424529 systemd-logind[1496]: Removed session 10.
Nov 13 11:56:59.465567 sshd[1741]: Connection closed by invalid user ubnt 92.255.85.189 port 39970 [preauth]
Nov 13 11:56:59.468364 systemd[1]: sshd@8-10.244.96.58:22-92.255.85.189:39970.service: Deactivated successfully.
Nov 13 11:56:59.574664 systemd[1]: Started sshd@9-10.244.96.58:22-147.75.109.163:51912.service - OpenSSH per-connection server daemon (147.75.109.163:51912).
Nov 13 11:57:00.466941 sshd[1792]: Accepted publickey for core from 147.75.109.163 port 51912 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:57:00.469328 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:57:00.476071 systemd-logind[1496]: New session 11 of user core.
Nov 13 11:57:00.486745 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 13 11:57:00.946826 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 13 11:57:00.947109 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 11:57:01.352890 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 13 11:57:01.353361 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 13 11:57:01.772844 dockerd[1812]: time="2024-11-13T11:57:01.772402567Z" level=info msg="Starting up"
Nov 13 11:57:01.916999 dockerd[1812]: time="2024-11-13T11:57:01.916949853Z" level=info msg="Loading containers: start."
Nov 13 11:57:02.051298 kernel: Initializing XFRM netlink socket
Nov 13 11:57:02.092731 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Nov 13 11:57:02.162105 systemd-networkd[1448]: docker0: Link UP
Nov 13 11:57:02.183395 dockerd[1812]: time="2024-11-13T11:57:02.183309412Z" level=info msg="Loading containers: done."
Nov 13 11:57:02.207003 dockerd[1812]: time="2024-11-13T11:57:02.206507286Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 13 11:57:02.207003 dockerd[1812]: time="2024-11-13T11:57:02.206629380Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 13 11:57:02.207003 dockerd[1812]: time="2024-11-13T11:57:02.206747560Z" level=info msg="Daemon has completed initialization"
Nov 13 11:57:02.231387 dockerd[1812]: time="2024-11-13T11:57:02.231323968Z" level=info msg="API listen on /run/docker.sock"
Nov 13 11:57:02.231554 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 13 11:57:02.366694 systemd-timesyncd[1411]: Contacted time server [2a01:7e00::f03c:91ff:fe89:410f]:123 (2.flatcar.pool.ntp.org).
Nov 13 11:57:02.366777 systemd-timesyncd[1411]: Initial clock synchronization to Wed 2024-11-13 11:57:02.328496 UTC.
Nov 13 11:57:03.599758 containerd[1515]: time="2024-11-13T11:57:03.599287661Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\""
Nov 13 11:57:04.543380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907113299.mount: Deactivated successfully.
Nov 13 11:57:06.188307 containerd[1515]: time="2024-11-13T11:57:06.188093382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:06.190274 containerd[1515]: time="2024-11-13T11:57:06.190150899Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:06.190274 containerd[1515]: time="2024-11-13T11:57:06.190221272Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676451"
Nov 13 11:57:06.195388 containerd[1515]: time="2024-11-13T11:57:06.195344493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:06.197801 containerd[1515]: time="2024-11-13T11:57:06.196765734Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 2.597398099s"
Nov 13 11:57:06.197801 containerd[1515]: time="2024-11-13T11:57:06.196813693Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\""
Nov 13 11:57:06.224266 containerd[1515]: time="2024-11-13T11:57:06.224230704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\""
Nov 13 11:57:07.921241 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 13 11:57:07.937534 containerd[1515]: time="2024-11-13T11:57:07.936234271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:07.938447 containerd[1515]: time="2024-11-13T11:57:07.938212048Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605804"
Nov 13 11:57:07.939210 containerd[1515]: time="2024-11-13T11:57:07.939131282Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:07.942704 containerd[1515]: time="2024-11-13T11:57:07.942647281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:07.943971 containerd[1515]: time="2024-11-13T11:57:07.943880700Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 1.71944128s"
Nov 13 11:57:07.943971 containerd[1515]: time="2024-11-13T11:57:07.943941940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\""
Nov 13 11:57:07.974980 containerd[1515]: time="2024-11-13T11:57:07.974697672Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\""
Nov 13 11:57:09.028839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 13 11:57:09.037578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 11:57:09.197377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 11:57:09.210777 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 13 11:57:09.297412 kubelet[2042]: E1113 11:57:09.296839 2042 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 13 11:57:09.300419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 11:57:09.300617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 13 11:57:09.580334 containerd[1515]: time="2024-11-13T11:57:09.579905302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:09.581184 containerd[1515]: time="2024-11-13T11:57:09.581136703Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784252"
Nov 13 11:57:09.581887 containerd[1515]: time="2024-11-13T11:57:09.581504662Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:09.584962 containerd[1515]: time="2024-11-13T11:57:09.584884118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:09.586452 containerd[1515]: time="2024-11-13T11:57:09.586001981Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 1.611257789s"
Nov 13 11:57:09.586452 containerd[1515]: time="2024-11-13T11:57:09.586049649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\""
Nov 13 11:57:09.612945 containerd[1515]: time="2024-11-13T11:57:09.612892124Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\""
Nov 13 11:57:10.964537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960498758.mount: Deactivated successfully.
Nov 13 11:57:11.338146 containerd[1515]: time="2024-11-13T11:57:11.337503982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:11.339719 containerd[1515]: time="2024-11-13T11:57:11.338840842Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054632"
Nov 13 11:57:11.339719 containerd[1515]: time="2024-11-13T11:57:11.339570466Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:11.341436 containerd[1515]: time="2024-11-13T11:57:11.341384153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:11.342612 containerd[1515]: time="2024-11-13T11:57:11.342571044Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 1.729634983s"
Nov 13 11:57:11.342739 containerd[1515]: time="2024-11-13T11:57:11.342614537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\""
Nov 13 11:57:11.369762 containerd[1515]: time="2024-11-13T11:57:11.369697011Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 13 11:57:12.025860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188365018.mount: Deactivated successfully.
Nov 13 11:57:12.966744 containerd[1515]: time="2024-11-13T11:57:12.965529764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:12.967776 containerd[1515]: time="2024-11-13T11:57:12.967739849Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Nov 13 11:57:12.968252 containerd[1515]: time="2024-11-13T11:57:12.968221631Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:12.971025 containerd[1515]: time="2024-11-13T11:57:12.970993947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:12.972290 containerd[1515]: time="2024-11-13T11:57:12.972263751Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.602498669s"
Nov 13 11:57:12.972434 containerd[1515]: time="2024-11-13T11:57:12.972418388Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Nov 13 11:57:12.996602 containerd[1515]: time="2024-11-13T11:57:12.996567250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Nov 13 11:57:13.642384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873717852.mount: Deactivated successfully.
Nov 13 11:57:13.646349 containerd[1515]: time="2024-11-13T11:57:13.646285633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:13.646969 containerd[1515]: time="2024-11-13T11:57:13.646921395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Nov 13 11:57:13.647847 containerd[1515]: time="2024-11-13T11:57:13.647815448Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:13.651880 containerd[1515]: time="2024-11-13T11:57:13.651835699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:13.653770 containerd[1515]: time="2024-11-13T11:57:13.653722838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 656.963401ms"
Nov 13 11:57:13.653927 containerd[1515]: time="2024-11-13T11:57:13.653775008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Nov 13 11:57:13.686406 containerd[1515]: time="2024-11-13T11:57:13.686310790Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Nov 13 11:57:14.292758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373368347.mount: Deactivated successfully.
Nov 13 11:57:18.312012 containerd[1515]: time="2024-11-13T11:57:18.311461362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:18.313085 containerd[1515]: time="2024-11-13T11:57:18.312607157Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Nov 13 11:57:18.313318 containerd[1515]: time="2024-11-13T11:57:18.313268759Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:18.316274 containerd[1515]: time="2024-11-13T11:57:18.316229916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 11:57:18.321539 containerd[1515]: time="2024-11-13T11:57:18.321306241Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.634937757s"
Nov 13 11:57:18.321539 containerd[1515]: time="2024-11-13T11:57:18.321404153Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Nov 13 11:57:19.332167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 13 11:57:19.344277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 11:57:19.474406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 11:57:19.481147 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 13 11:57:19.537697 kubelet[2236]: E1113 11:57:19.537651 2236 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 13 11:57:19.540672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 11:57:19.540830 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 13 11:57:20.658637 update_engine[1497]: I20241113 11:57:20.658435 1497 update_attempter.cc:509] Updating boot flags...
Nov 13 11:57:20.720226 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2252)
Nov 13 11:57:20.791317 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2253)
Nov 13 11:57:21.237793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 11:57:21.246587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 11:57:21.264454 systemd[1]: Reloading requested from client PID 2265 ('systemctl') (unit session-11.scope)...
Nov 13 11:57:21.264487 systemd[1]: Reloading...
Nov 13 11:57:21.393262 zram_generator::config[2308]: No configuration found.
Nov 13 11:57:21.530837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 11:57:21.607505 systemd[1]: Reloading finished in 342 ms.
Nov 13 11:57:21.673350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 11:57:21.678836 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 11:57:21.680371 systemd[1]: kubelet.service: Deactivated successfully.
Nov 13 11:57:21.680632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 11:57:21.685464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 11:57:21.797592 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 13 11:57:21.797605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 11:57:21.850640 kubelet[2373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 11:57:21.850640 kubelet[2373]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 13 11:57:21.850640 kubelet[2373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 11:57:21.852166 kubelet[2373]: I1113 11:57:21.851752 2373 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 13 11:57:22.202126 kubelet[2373]: I1113 11:57:22.202079 2373 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Nov 13 11:57:22.202126 kubelet[2373]: I1113 11:57:22.202112 2373 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 13 11:57:22.202639 kubelet[2373]: I1113 11:57:22.202622 2373 server.go:927] "Client rotation is on, will bootstrap in background"
Nov 13 11:57:22.225060 kubelet[2373]: I1113 11:57:22.225026 2373 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 13 11:57:22.234413 kubelet[2373]: E1113 11:57:22.234380 2373 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.244.96.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.244.96.58:6443: connect: connection refused
Nov 13 11:57:22.245586 kubelet[2373]: I1113 11:57:22.245546 2373 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 13 11:57:22.245903 kubelet[2373]: I1113 11:57:22.245861 2373 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 13 11:57:22.247332 kubelet[2373]: I1113 11:57:22.245905 2373 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gr2mf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 13 11:57:22.247497 kubelet[2373]: I1113 11:57:22.247360 2373 topology_manager.go:138] "Creating topology manager with none policy"
Nov 13 11:57:22.247497 kubelet[2373]: I1113 11:57:22.247370 2373 container_manager_linux.go:301] "Creating device plugin manager"
Nov 13 11:57:22.248943 kubelet[2373]: I1113 11:57:22.248910 2373 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 11:57:22.251312 kubelet[2373]: I1113 11:57:22.251290 2373 kubelet.go:400] "Attempting to sync node with API server"
Nov 13 11:57:22.251350 kubelet[2373]: I1113 11:57:22.251314 2373 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 13 11:57:22.251350 kubelet[2373]: I1113 11:57:22.251346 2373 kubelet.go:312] "Adding apiserver pod source"
Nov 13 11:57:22.251396 kubelet[2373]: I1113 11:57:22.251372 2373 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 13 11:57:22.253972 kubelet[2373]: W1113 11:57:22.253911 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.96.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gr2mf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused
Nov 13 11:57:22.254860 kubelet[2373]: E1113 11:57:22.254103 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.244.96.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gr2mf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused
Nov 13 11:57:22.254860 kubelet[2373]: W1113 11:57:22.254545 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.96.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused
Nov 13 11:57:22.254860 kubelet[2373]: E1113 11:57:22.254585 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.244.96.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused
Nov 13 11:57:22.255149 kubelet[2373]: I1113 11:57:22.255132 2373 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 13 11:57:22.257773 kubelet[2373]: I1113 11:57:22.257738 2373 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 13 11:57:22.258308 kubelet[2373]: W1113 11:57:22.258284 2373 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 13 11:57:22.261401 kubelet[2373]: I1113 11:57:22.261324 2373 server.go:1264] "Started kubelet"
Nov 13 11:57:22.264067 kubelet[2373]: I1113 11:57:22.263719 2373 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 13 11:57:22.267019 kubelet[2373]: I1113 11:57:22.266795 2373 server.go:455] "Adding debug handlers to kubelet server"
Nov 13 11:57:22.272207 kubelet[2373]: I1113 11:57:22.271852 2373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 13 11:57:22.272207 kubelet[2373]: I1113 11:57:22.272113 2373 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 13 11:57:22.274329 kubelet[2373]: E1113 11:57:22.274183 2373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.96.58:6443/api/v1/namespaces/default/events\": dial tcp 10.244.96.58:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-gr2mf.gb1.brightbox.com.180785347ee9a3b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gr2mf.gb1.brightbox.com,UID:srv-gr2mf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gr2mf.gb1.brightbox.com,},FirstTimestamp:2024-11-13 11:57:22.261279667 +0000 UTC m=+0.459107530,LastTimestamp:2024-11-13 11:57:22.261279667 +0000 UTC m=+0.459107530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gr2mf.gb1.brightbox.com,}"
Nov 13 11:57:22.276698 kubelet[2373]: I1113 11:57:22.276677 2373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 13 11:57:22.279622 kubelet[2373]: I1113 11:57:22.279349 2373 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 13 11:57:22.280891 kubelet[2373]: I1113 11:57:22.280773 2373 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Nov 13 11:57:22.280891 kubelet[2373]: I1113 11:57:22.280841 2373 reconciler.go:26] "Reconciler: start to sync state"
Nov 13 11:57:22.282004 kubelet[2373]: W1113 11:57:22.281792 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.96.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused
Nov 13 11:57:22.282004 kubelet[2373]: E1113 11:57:22.281842 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.96.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused
Nov 13 11:57:22.282004 kubelet[2373]: E1113 11:57:22.281889 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.96.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gr2mf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.96.58:6443: connect: connection refused" interval="200ms"
Nov 13 11:57:22.282345 kubelet[2373]: E1113 11:57:22.282327 2373 kubelet.go:1467] "Image garbage collection
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 13 11:57:22.283426 kubelet[2373]: I1113 11:57:22.283403 2373 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 11:57:22.285719 kubelet[2373]: I1113 11:57:22.285517 2373 factory.go:221] Registration of the containerd container factory successfully Nov 13 11:57:22.285719 kubelet[2373]: I1113 11:57:22.285534 2373 factory.go:221] Registration of the systemd container factory successfully Nov 13 11:57:22.294694 kubelet[2373]: I1113 11:57:22.294649 2373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 11:57:22.295919 kubelet[2373]: I1113 11:57:22.295899 2373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 13 11:57:22.295971 kubelet[2373]: I1113 11:57:22.295943 2373 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 11:57:22.295971 kubelet[2373]: I1113 11:57:22.295964 2373 kubelet.go:2337] "Starting kubelet main sync loop" Nov 13 11:57:22.296029 kubelet[2373]: E1113 11:57:22.296008 2373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 11:57:22.303238 kubelet[2373]: W1113 11:57:22.303182 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.96.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:22.303360 kubelet[2373]: E1113 11:57:22.303242 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.244.96.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:22.316867 kubelet[2373]: I1113 11:57:22.316851 2373 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 11:57:22.316867 kubelet[2373]: I1113 11:57:22.316863 2373 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 11:57:22.316973 kubelet[2373]: I1113 11:57:22.316879 2373 state_mem.go:36] "Initialized new in-memory state store" Nov 13 11:57:22.318288 kubelet[2373]: I1113 11:57:22.318268 2373 policy_none.go:49] "None policy: Start" Nov 13 11:57:22.318877 kubelet[2373]: I1113 11:57:22.318852 2373 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 11:57:22.318937 kubelet[2373]: I1113 11:57:22.318883 2373 state_mem.go:35] "Initializing new in-memory state store" Nov 13 11:57:22.323737 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 13 11:57:22.332203 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 13 11:57:22.336566 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 13 11:57:22.343774 kubelet[2373]: I1113 11:57:22.342909 2373 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 11:57:22.343774 kubelet[2373]: I1113 11:57:22.343088 2373 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 13 11:57:22.343774 kubelet[2373]: I1113 11:57:22.343220 2373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 11:57:22.349083 kubelet[2373]: E1113 11:57:22.348565 2373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-gr2mf.gb1.brightbox.com\" not found" Nov 13 11:57:22.384027 kubelet[2373]: I1113 11:57:22.383977 2373 kubelet_node_status.go:73] "Attempting to register node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.384992 kubelet[2373]: E1113 11:57:22.384923 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.96.58:6443/api/v1/nodes\": dial tcp 10.244.96.58:6443: connect: connection refused" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.396845 kubelet[2373]: I1113 11:57:22.396230 2373 topology_manager.go:215] "Topology Admit Handler" podUID="5f6bf0a5ac2d61e32b0973a4745aa96c" podNamespace="kube-system" podName="kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.401574 kubelet[2373]: I1113 11:57:22.401499 2373 topology_manager.go:215] "Topology Admit Handler" podUID="e06fde8b02fb0908c92f03501a0df148" podNamespace="kube-system" podName="kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.406612 kubelet[2373]: I1113 11:57:22.405447 2373 topology_manager.go:215] "Topology Admit Handler" podUID="cb106591ffea98bd6286cc044f766e07" podNamespace="kube-system" podName="kube-scheduler-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.415034 systemd[1]: Created slice kubepods-burstable-pod5f6bf0a5ac2d61e32b0973a4745aa96c.slice - libcontainer container 
kubepods-burstable-pod5f6bf0a5ac2d61e32b0973a4745aa96c.slice. Nov 13 11:57:22.437762 systemd[1]: Created slice kubepods-burstable-pode06fde8b02fb0908c92f03501a0df148.slice - libcontainer container kubepods-burstable-pode06fde8b02fb0908c92f03501a0df148.slice. Nov 13 11:57:22.448760 systemd[1]: Created slice kubepods-burstable-podcb106591ffea98bd6286cc044f766e07.slice - libcontainer container kubepods-burstable-podcb106591ffea98bd6286cc044f766e07.slice. Nov 13 11:57:22.483069 kubelet[2373]: E1113 11:57:22.482413 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.96.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gr2mf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.96.58:6443: connect: connection refused" interval="400ms" Nov 13 11:57:22.583428 kubelet[2373]: I1113 11:57:22.583270 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f6bf0a5ac2d61e32b0973a4745aa96c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gr2mf.gb1.brightbox.com\" (UID: \"5f6bf0a5ac2d61e32b0973a4745aa96c\") " pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583428 kubelet[2373]: I1113 11:57:22.583338 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-k8s-certs\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583428 kubelet[2373]: I1113 11:57:22.583367 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f6bf0a5ac2d61e32b0973a4745aa96c-ca-certs\") pod \"kube-apiserver-srv-gr2mf.gb1.brightbox.com\" (UID: 
\"5f6bf0a5ac2d61e32b0973a4745aa96c\") " pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583428 kubelet[2373]: I1113 11:57:22.583395 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-ca-certs\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583428 kubelet[2373]: I1113 11:57:22.583432 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-flexvolume-dir\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583752 kubelet[2373]: I1113 11:57:22.583460 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-kubeconfig\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583752 kubelet[2373]: I1113 11:57:22.583485 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583752 kubelet[2373]: I1113 11:57:22.583512 2373 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb106591ffea98bd6286cc044f766e07-kubeconfig\") pod \"kube-scheduler-srv-gr2mf.gb1.brightbox.com\" (UID: \"cb106591ffea98bd6286cc044f766e07\") " pod="kube-system/kube-scheduler-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.583752 kubelet[2373]: I1113 11:57:22.583535 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f6bf0a5ac2d61e32b0973a4745aa96c-k8s-certs\") pod \"kube-apiserver-srv-gr2mf.gb1.brightbox.com\" (UID: \"5f6bf0a5ac2d61e32b0973a4745aa96c\") " pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.588270 kubelet[2373]: I1113 11:57:22.588127 2373 kubelet_node_status.go:73] "Attempting to register node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.588955 kubelet[2373]: E1113 11:57:22.588910 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.96.58:6443/api/v1/nodes\": dial tcp 10.244.96.58:6443: connect: connection refused" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.737037 containerd[1515]: time="2024-11-13T11:57:22.736740305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gr2mf.gb1.brightbox.com,Uid:5f6bf0a5ac2d61e32b0973a4745aa96c,Namespace:kube-system,Attempt:0,}" Nov 13 11:57:22.756540 containerd[1515]: time="2024-11-13T11:57:22.756115117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gr2mf.gb1.brightbox.com,Uid:e06fde8b02fb0908c92f03501a0df148,Namespace:kube-system,Attempt:0,}" Nov 13 11:57:22.756540 containerd[1515]: time="2024-11-13T11:57:22.756240498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gr2mf.gb1.brightbox.com,Uid:cb106591ffea98bd6286cc044f766e07,Namespace:kube-system,Attempt:0,}" Nov 13 11:57:22.883139 kubelet[2373]: E1113 
11:57:22.883042 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.96.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gr2mf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.96.58:6443: connect: connection refused" interval="800ms" Nov 13 11:57:22.993616 kubelet[2373]: I1113 11:57:22.992741 2373 kubelet_node_status.go:73] "Attempting to register node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:22.993616 kubelet[2373]: E1113 11:57:22.993405 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.96.58:6443/api/v1/nodes\": dial tcp 10.244.96.58:6443: connect: connection refused" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:23.096372 kubelet[2373]: W1113 11:57:23.096281 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.96.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.096851 kubelet[2373]: E1113 11:57:23.096759 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.244.96.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.274020 kubelet[2373]: W1113 11:57:23.273691 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.96.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gr2mf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.274020 kubelet[2373]: E1113 11:57:23.273826 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.244.96.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gr2mf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.393883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754868137.mount: Deactivated successfully. Nov 13 11:57:23.398657 containerd[1515]: time="2024-11-13T11:57:23.397802424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 11:57:23.399359 containerd[1515]: time="2024-11-13T11:57:23.399312385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 13 11:57:23.400059 containerd[1515]: time="2024-11-13T11:57:23.400022981Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 11:57:23.401181 containerd[1515]: time="2024-11-13T11:57:23.401151213Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 11:57:23.401442 containerd[1515]: time="2024-11-13T11:57:23.401414910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 13 11:57:23.402405 containerd[1515]: time="2024-11-13T11:57:23.402211573Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 11:57:23.402778 containerd[1515]: time="2024-11-13T11:57:23.402747893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 13 11:57:23.403648 containerd[1515]: 
time="2024-11-13T11:57:23.403622989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 13 11:57:23.406750 containerd[1515]: time="2024-11-13T11:57:23.406705616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 669.731422ms" Nov 13 11:57:23.408582 containerd[1515]: time="2024-11-13T11:57:23.408542387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.158483ms" Nov 13 11:57:23.413527 containerd[1515]: time="2024-11-13T11:57:23.413373556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.115718ms" Nov 13 11:57:23.471951 kubelet[2373]: W1113 11:57:23.471887 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.96.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.471951 kubelet[2373]: E1113 11:57:23.471957 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get "https://10.244.96.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.494629 kubelet[2373]: W1113 11:57:23.494587 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.96.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.495283 kubelet[2373]: E1113 11:57:23.495129 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.244.96.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.96.58:6443: connect: connection refused Nov 13 11:57:23.556150 containerd[1515]: time="2024-11-13T11:57:23.555866142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:57:23.556150 containerd[1515]: time="2024-11-13T11:57:23.555950629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:57:23.556444 containerd[1515]: time="2024-11-13T11:57:23.555967914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:23.556444 containerd[1515]: time="2024-11-13T11:57:23.556049967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:23.560955 containerd[1515]: time="2024-11-13T11:57:23.560778061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:57:23.560955 containerd[1515]: time="2024-11-13T11:57:23.560834903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:57:23.560955 containerd[1515]: time="2024-11-13T11:57:23.560848419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:23.561387 containerd[1515]: time="2024-11-13T11:57:23.560931196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:23.563211 containerd[1515]: time="2024-11-13T11:57:23.563059210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:57:23.563211 containerd[1515]: time="2024-11-13T11:57:23.563125887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:57:23.563211 containerd[1515]: time="2024-11-13T11:57:23.563144171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:23.563477 containerd[1515]: time="2024-11-13T11:57:23.563246139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:23.585394 systemd[1]: Started cri-containerd-fc79be5201ebe02e47da0ab7bcfb1dc443f39b19636d9d0228e05f7d3be2d5f1.scope - libcontainer container fc79be5201ebe02e47da0ab7bcfb1dc443f39b19636d9d0228e05f7d3be2d5f1. Nov 13 11:57:23.604211 systemd[1]: Started cri-containerd-4cd803f36e25dd67d90a1eed3bdc22296cb46acfdd0e9b41d4b63bd597edcde9.scope - libcontainer container 4cd803f36e25dd67d90a1eed3bdc22296cb46acfdd0e9b41d4b63bd597edcde9. Nov 13 11:57:23.608617 systemd[1]: Started cri-containerd-bf22d880d1955c17b616b62da4a1a8220331ea826d847f3482389561e8dabefa.scope - libcontainer container bf22d880d1955c17b616b62da4a1a8220331ea826d847f3482389561e8dabefa. 
Nov 13 11:57:23.682674 containerd[1515]: time="2024-11-13T11:57:23.682472697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gr2mf.gb1.brightbox.com,Uid:e06fde8b02fb0908c92f03501a0df148,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc79be5201ebe02e47da0ab7bcfb1dc443f39b19636d9d0228e05f7d3be2d5f1\"" Nov 13 11:57:23.685631 kubelet[2373]: E1113 11:57:23.685576 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.96.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gr2mf.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.96.58:6443: connect: connection refused" interval="1.6s" Nov 13 11:57:23.688289 containerd[1515]: time="2024-11-13T11:57:23.688198056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gr2mf.gb1.brightbox.com,Uid:5f6bf0a5ac2d61e32b0973a4745aa96c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf22d880d1955c17b616b62da4a1a8220331ea826d847f3482389561e8dabefa\"" Nov 13 11:57:23.689752 containerd[1515]: time="2024-11-13T11:57:23.689725953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gr2mf.gb1.brightbox.com,Uid:cb106591ffea98bd6286cc044f766e07,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cd803f36e25dd67d90a1eed3bdc22296cb46acfdd0e9b41d4b63bd597edcde9\"" Nov 13 11:57:23.693859 containerd[1515]: time="2024-11-13T11:57:23.693751517Z" level=info msg="CreateContainer within sandbox \"4cd803f36e25dd67d90a1eed3bdc22296cb46acfdd0e9b41d4b63bd597edcde9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 13 11:57:23.693859 containerd[1515]: time="2024-11-13T11:57:23.693825867Z" level=info msg="CreateContainer within sandbox \"fc79be5201ebe02e47da0ab7bcfb1dc443f39b19636d9d0228e05f7d3be2d5f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 13 11:57:23.694424 containerd[1515]: time="2024-11-13T11:57:23.694311453Z" level=info 
msg="CreateContainer within sandbox \"bf22d880d1955c17b616b62da4a1a8220331ea826d847f3482389561e8dabefa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 13 11:57:23.705970 containerd[1515]: time="2024-11-13T11:57:23.705937040Z" level=info msg="CreateContainer within sandbox \"fc79be5201ebe02e47da0ab7bcfb1dc443f39b19636d9d0228e05f7d3be2d5f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f18c9bb636151ed06b45a3ec305c4a48c9f8c08b195e66e643060225a55064b5\"" Nov 13 11:57:23.706798 containerd[1515]: time="2024-11-13T11:57:23.706770687Z" level=info msg="StartContainer for \"f18c9bb636151ed06b45a3ec305c4a48c9f8c08b195e66e643060225a55064b5\"" Nov 13 11:57:23.708232 containerd[1515]: time="2024-11-13T11:57:23.708204394Z" level=info msg="CreateContainer within sandbox \"4cd803f36e25dd67d90a1eed3bdc22296cb46acfdd0e9b41d4b63bd597edcde9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"faa53104338fc571240e007a71535e8e748e956a3116847abcc37c56b5e3ea7c\"" Nov 13 11:57:23.708561 containerd[1515]: time="2024-11-13T11:57:23.708542007Z" level=info msg="StartContainer for \"faa53104338fc571240e007a71535e8e748e956a3116847abcc37c56b5e3ea7c\"" Nov 13 11:57:23.713411 containerd[1515]: time="2024-11-13T11:57:23.713363107Z" level=info msg="CreateContainer within sandbox \"bf22d880d1955c17b616b62da4a1a8220331ea826d847f3482389561e8dabefa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d8895b5e3a359cce85d179d7f2da311912da5dd3b99cd6de4d99112149d2f005\"" Nov 13 11:57:23.714386 containerd[1515]: time="2024-11-13T11:57:23.714324010Z" level=info msg="StartContainer for \"d8895b5e3a359cce85d179d7f2da311912da5dd3b99cd6de4d99112149d2f005\"" Nov 13 11:57:23.742365 systemd[1]: Started cri-containerd-faa53104338fc571240e007a71535e8e748e956a3116847abcc37c56b5e3ea7c.scope - libcontainer container faa53104338fc571240e007a71535e8e748e956a3116847abcc37c56b5e3ea7c. 
Nov 13 11:57:23.751347 systemd[1]: Started cri-containerd-f18c9bb636151ed06b45a3ec305c4a48c9f8c08b195e66e643060225a55064b5.scope - libcontainer container f18c9bb636151ed06b45a3ec305c4a48c9f8c08b195e66e643060225a55064b5. Nov 13 11:57:23.761738 systemd[1]: Started cri-containerd-d8895b5e3a359cce85d179d7f2da311912da5dd3b99cd6de4d99112149d2f005.scope - libcontainer container d8895b5e3a359cce85d179d7f2da311912da5dd3b99cd6de4d99112149d2f005. Nov 13 11:57:23.799863 kubelet[2373]: I1113 11:57:23.799396 2373 kubelet_node_status.go:73] "Attempting to register node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:23.799863 kubelet[2373]: E1113 11:57:23.799735 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.244.96.58:6443/api/v1/nodes\": dial tcp 10.244.96.58:6443: connect: connection refused" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:23.832705 containerd[1515]: time="2024-11-13T11:57:23.832573746Z" level=info msg="StartContainer for \"faa53104338fc571240e007a71535e8e748e956a3116847abcc37c56b5e3ea7c\" returns successfully" Nov 13 11:57:23.848685 containerd[1515]: time="2024-11-13T11:57:23.848629365Z" level=info msg="StartContainer for \"f18c9bb636151ed06b45a3ec305c4a48c9f8c08b195e66e643060225a55064b5\" returns successfully" Nov 13 11:57:23.854698 containerd[1515]: time="2024-11-13T11:57:23.854659411Z" level=info msg="StartContainer for \"d8895b5e3a359cce85d179d7f2da311912da5dd3b99cd6de4d99112149d2f005\" returns successfully" Nov 13 11:57:25.407247 kubelet[2373]: I1113 11:57:25.404528 2373 kubelet_node_status.go:73] "Attempting to register node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:25.661421 kubelet[2373]: E1113 11:57:25.659224 2373 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-gr2mf.gb1.brightbox.com\" not found" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:25.812795 kubelet[2373]: I1113 11:57:25.812699 2373 kubelet_node_status.go:76] "Successfully 
registered node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:26.256489 kubelet[2373]: I1113 11:57:26.256285 2373 apiserver.go:52] "Watching apiserver" Nov 13 11:57:26.281851 kubelet[2373]: I1113 11:57:26.281773 2373 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 13 11:57:27.779486 systemd[1]: Reloading requested from client PID 2649 ('systemctl') (unit session-11.scope)... Nov 13 11:57:27.779520 systemd[1]: Reloading... Nov 13 11:57:27.895293 zram_generator::config[2697]: No configuration found. Nov 13 11:57:28.027470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 11:57:28.117252 systemd[1]: Reloading finished in 337 ms. Nov 13 11:57:28.162479 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 11:57:28.172578 systemd[1]: kubelet.service: Deactivated successfully. Nov 13 11:57:28.172855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 11:57:28.177612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 11:57:28.371619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 11:57:28.372356 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 11:57:28.464079 kubelet[2752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 11:57:28.464079 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Nov 13 11:57:28.464079 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 11:57:28.465540 kubelet[2752]: I1113 11:57:28.465467 2752 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 13 11:57:28.471831 kubelet[2752]: I1113 11:57:28.471791 2752 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 13 11:57:28.472566 kubelet[2752]: I1113 11:57:28.472015 2752 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 13 11:57:28.472566 kubelet[2752]: I1113 11:57:28.472298 2752 server.go:927] "Client rotation is on, will bootstrap in background" Nov 13 11:57:28.473895 kubelet[2752]: I1113 11:57:28.473870 2752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 13 11:57:28.475785 kubelet[2752]: I1113 11:57:28.475552 2752 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 11:57:28.492900 kubelet[2752]: I1113 11:57:28.492864 2752 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 13 11:57:28.493187 kubelet[2752]: I1113 11:57:28.493145 2752 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 13 11:57:28.493396 kubelet[2752]: I1113 11:57:28.493212 2752 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gr2mf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 13 11:57:28.493637 kubelet[2752]: I1113 11:57:28.493428 2752 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 13 11:57:28.493637 kubelet[2752]: I1113 11:57:28.493441 2752 container_manager_linux.go:301] "Creating device plugin manager" Nov 13 11:57:28.493637 kubelet[2752]: I1113 11:57:28.493510 2752 state_mem.go:36] "Initialized new in-memory state store" Nov 13 11:57:28.493744 kubelet[2752]: I1113 11:57:28.493681 2752 kubelet.go:400] "Attempting to sync node with API server" Nov 13 11:57:28.493744 kubelet[2752]: I1113 11:57:28.493697 2752 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 13 11:57:28.493744 kubelet[2752]: I1113 11:57:28.493722 2752 kubelet.go:312] "Adding apiserver pod source" Nov 13 11:57:28.493744 kubelet[2752]: I1113 11:57:28.493742 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 13 11:57:28.495856 kubelet[2752]: I1113 11:57:28.495833 2752 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 13 11:57:28.498202 kubelet[2752]: I1113 11:57:28.496036 2752 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 13 11:57:28.498202 kubelet[2752]: I1113 11:57:28.496527 2752 server.go:1264] "Started kubelet" Nov 13 11:57:28.501269 kubelet[2752]: I1113 11:57:28.500241 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 13 11:57:28.513046 kubelet[2752]: I1113 11:57:28.512938 2752 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 13 11:57:28.516901 kubelet[2752]: I1113 11:57:28.516131 2752 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 13 11:57:28.519648 kubelet[2752]: I1113 11:57:28.518776 2752 server.go:455] "Adding debug handlers to kubelet server" Nov 13 11:57:28.519648 kubelet[2752]: I1113 11:57:28.519081 2752 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 13 11:57:28.522256 kubelet[2752]: I1113 11:57:28.520265 2752 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 Nov 13 11:57:28.522256 kubelet[2752]: I1113 11:57:28.520457 2752 reconciler.go:26] "Reconciler: start to sync state" Nov 13 11:57:28.522256 kubelet[2752]: I1113 11:57:28.520568 2752 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 13 11:57:28.525094 kubelet[2752]: I1113 11:57:28.525053 2752 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 11:57:28.527906 kubelet[2752]: I1113 11:57:28.527886 2752 factory.go:221] Registration of the containerd container factory successfully Nov 13 11:57:28.528022 kubelet[2752]: I1113 11:57:28.528012 2752 factory.go:221] Registration of the systemd container factory successfully Nov 13 11:57:28.533550 kubelet[2752]: I1113 11:57:28.533490 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 11:57:28.534760 kubelet[2752]: I1113 11:57:28.534733 2752 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 13 11:57:28.534816 kubelet[2752]: I1113 11:57:28.534784 2752 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 11:57:28.534862 kubelet[2752]: I1113 11:57:28.534822 2752 kubelet.go:2337] "Starting kubelet main sync loop" Nov 13 11:57:28.534937 kubelet[2752]: E1113 11:57:28.534907 2752 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 11:57:28.609453 kubelet[2752]: I1113 11:57:28.609425 2752 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 11:57:28.610014 kubelet[2752]: I1113 11:57:28.609654 2752 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 11:57:28.610014 kubelet[2752]: I1113 11:57:28.609680 2752 state_mem.go:36] "Initialized new in-memory state store" Nov 13 11:57:28.610014 kubelet[2752]: I1113 11:57:28.609909 2752 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 13 11:57:28.610014 kubelet[2752]: I1113 11:57:28.609921 2752 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 13 11:57:28.610014 kubelet[2752]: I1113 11:57:28.609941 2752 policy_none.go:49] "None policy: Start" Nov 13 11:57:28.611081 kubelet[2752]: I1113 11:57:28.611065 2752 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 11:57:28.611245 kubelet[2752]: I1113 11:57:28.611153 2752 state_mem.go:35] "Initializing new in-memory state store" Nov 13 11:57:28.611408 kubelet[2752]: I1113 11:57:28.611398 2752 state_mem.go:75] "Updated machine memory state" Nov 13 11:57:28.628563 kubelet[2752]: I1113 11:57:28.628404 2752 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 11:57:28.630481 kubelet[2752]: I1113 11:57:28.630351 2752 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 13 11:57:28.631074 kubelet[2752]: I1113 11:57:28.630658 2752 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 11:57:28.635379 kubelet[2752]: I1113 11:57:28.635336 2752 topology_manager.go:215] "Topology Admit Handler" podUID="cb106591ffea98bd6286cc044f766e07" podNamespace="kube-system" podName="kube-scheduler-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.635621 kubelet[2752]: I1113 11:57:28.635494 2752 topology_manager.go:215] "Topology Admit Handler" podUID="5f6bf0a5ac2d61e32b0973a4745aa96c" podNamespace="kube-system" podName="kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.635621 kubelet[2752]: I1113 11:57:28.635617 2752 topology_manager.go:215] "Topology Admit Handler" podUID="e06fde8b02fb0908c92f03501a0df148" podNamespace="kube-system" podName="kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.642694 kubelet[2752]: I1113 11:57:28.642666 2752 kubelet_node_status.go:73] "Attempting to register node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.658720 kubelet[2752]: W1113 11:57:28.658677 2752 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 11:57:28.659090 kubelet[2752]: I1113 11:57:28.659072 2752 kubelet_node_status.go:112] "Node was previously registered" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.659184 kubelet[2752]: I1113 11:57:28.659171 2752 kubelet_node_status.go:76] "Successfully registered node" node="srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.663410 kubelet[2752]: W1113 11:57:28.663363 2752 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 11:57:28.665274 kubelet[2752]: W1113 11:57:28.664527 2752 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 11:57:28.822820 kubelet[2752]: I1113 11:57:28.822758 2752 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb106591ffea98bd6286cc044f766e07-kubeconfig\") pod \"kube-scheduler-srv-gr2mf.gb1.brightbox.com\" (UID: \"cb106591ffea98bd6286cc044f766e07\") " pod="kube-system/kube-scheduler-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823089 kubelet[2752]: I1113 11:57:28.823073 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f6bf0a5ac2d61e32b0973a4745aa96c-ca-certs\") pod \"kube-apiserver-srv-gr2mf.gb1.brightbox.com\" (UID: \"5f6bf0a5ac2d61e32b0973a4745aa96c\") " pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823175 kubelet[2752]: I1113 11:57:28.823163 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f6bf0a5ac2d61e32b0973a4745aa96c-k8s-certs\") pod \"kube-apiserver-srv-gr2mf.gb1.brightbox.com\" (UID: \"5f6bf0a5ac2d61e32b0973a4745aa96c\") " pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823274 kubelet[2752]: I1113 11:57:28.823257 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f6bf0a5ac2d61e32b0973a4745aa96c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gr2mf.gb1.brightbox.com\" (UID: \"5f6bf0a5ac2d61e32b0973a4745aa96c\") " pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823398 kubelet[2752]: I1113 11:57:28.823386 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-ca-certs\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " 
pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823497 kubelet[2752]: I1113 11:57:28.823485 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823720 kubelet[2752]: I1113 11:57:28.823575 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-flexvolume-dir\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823720 kubelet[2752]: I1113 11:57:28.823596 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-k8s-certs\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:28.823720 kubelet[2752]: I1113 11:57:28.823615 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e06fde8b02fb0908c92f03501a0df148-kubeconfig\") pod \"kube-controller-manager-srv-gr2mf.gb1.brightbox.com\" (UID: \"e06fde8b02fb0908c92f03501a0df148\") " pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:29.494535 kubelet[2752]: I1113 11:57:29.494469 2752 apiserver.go:52] "Watching apiserver" Nov 13 11:57:29.519777 kubelet[2752]: 
I1113 11:57:29.519734 2752 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Nov 13 11:57:29.582929 kubelet[2752]: I1113 11:57:29.582853 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" podStartSLOduration=1.5828221120000001 podStartE2EDuration="1.582822112s" podCreationTimestamp="2024-11-13 11:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 11:57:29.544992768 +0000 UTC m=+1.164901527" watchObservedRunningTime="2024-11-13 11:57:29.582822112 +0000 UTC m=+1.202730807" Nov 13 11:57:29.609210 kubelet[2752]: W1113 11:57:29.607983 2752 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 11:57:29.609210 kubelet[2752]: E1113 11:57:29.608086 2752 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-gr2mf.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gr2mf.gb1.brightbox.com" Nov 13 11:57:29.626228 kubelet[2752]: I1113 11:57:29.626146 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-gr2mf.gb1.brightbox.com" podStartSLOduration=1.626125314 podStartE2EDuration="1.626125314s" podCreationTimestamp="2024-11-13 11:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 11:57:29.584821465 +0000 UTC m=+1.204730164" watchObservedRunningTime="2024-11-13 11:57:29.626125314 +0000 UTC m=+1.246034013" Nov 13 11:57:29.626406 kubelet[2752]: I1113 11:57:29.626237 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-gr2mf.gb1.brightbox.com" podStartSLOduration=1.626233627 
podStartE2EDuration="1.626233627s" podCreationTimestamp="2024-11-13 11:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 11:57:29.625309672 +0000 UTC m=+1.245218360" watchObservedRunningTime="2024-11-13 11:57:29.626233627 +0000 UTC m=+1.246142301" Nov 13 11:57:34.163639 sudo[1795]: pam_unix(sudo:session): session closed for user root Nov 13 11:57:34.311131 sshd[1792]: pam_unix(sshd:session): session closed for user core Nov 13 11:57:34.322582 systemd[1]: sshd@9-10.244.96.58:22-147.75.109.163:51912.service: Deactivated successfully. Nov 13 11:57:34.325437 systemd[1]: session-11.scope: Deactivated successfully. Nov 13 11:57:34.325765 systemd[1]: session-11.scope: Consumed 5.204s CPU time, 187.3M memory peak, 0B memory swap peak. Nov 13 11:57:34.327448 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Nov 13 11:57:34.330390 systemd-logind[1496]: Removed session 11. Nov 13 11:57:38.324567 systemd[1]: Started sshd@10-10.244.96.58:22-206.168.34.62:35390.service - OpenSSH per-connection server daemon (206.168.34.62:35390). Nov 13 11:57:43.961155 kubelet[2752]: I1113 11:57:43.960983 2752 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 13 11:57:43.963717 containerd[1515]: time="2024-11-13T11:57:43.963623318Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 13 11:57:43.964610 kubelet[2752]: I1113 11:57:43.963981 2752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 13 11:57:44.848399 kubelet[2752]: I1113 11:57:44.848231 2752 topology_manager.go:215] "Topology Admit Handler" podUID="92f405c2-9230-47b6-b13b-9ef49d340fac" podNamespace="kube-system" podName="kube-proxy-85cvc" Nov 13 11:57:44.864526 systemd[1]: Created slice kubepods-besteffort-pod92f405c2_9230_47b6_b13b_9ef49d340fac.slice - libcontainer container kubepods-besteffort-pod92f405c2_9230_47b6_b13b_9ef49d340fac.slice. Nov 13 11:57:44.937240 kubelet[2752]: I1113 11:57:44.936384 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92f405c2-9230-47b6-b13b-9ef49d340fac-xtables-lock\") pod \"kube-proxy-85cvc\" (UID: \"92f405c2-9230-47b6-b13b-9ef49d340fac\") " pod="kube-system/kube-proxy-85cvc" Nov 13 11:57:44.937240 kubelet[2752]: I1113 11:57:44.936512 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92f405c2-9230-47b6-b13b-9ef49d340fac-lib-modules\") pod \"kube-proxy-85cvc\" (UID: \"92f405c2-9230-47b6-b13b-9ef49d340fac\") " pod="kube-system/kube-proxy-85cvc" Nov 13 11:57:44.937240 kubelet[2752]: I1113 11:57:44.936540 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92f405c2-9230-47b6-b13b-9ef49d340fac-kube-proxy\") pod \"kube-proxy-85cvc\" (UID: \"92f405c2-9230-47b6-b13b-9ef49d340fac\") " pod="kube-system/kube-proxy-85cvc" Nov 13 11:57:44.937240 kubelet[2752]: I1113 11:57:44.936596 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8kh\" (UniqueName: \"kubernetes.io/projected/92f405c2-9230-47b6-b13b-9ef49d340fac-kube-api-access-lm8kh\") pod 
\"kube-proxy-85cvc\" (UID: \"92f405c2-9230-47b6-b13b-9ef49d340fac\") " pod="kube-system/kube-proxy-85cvc" Nov 13 11:57:45.067469 kubelet[2752]: I1113 11:57:45.064817 2752 topology_manager.go:215] "Topology Admit Handler" podUID="38023ca4-5e28-46e0-8f9e-7096a637e252" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-nnqlk" Nov 13 11:57:45.086357 systemd[1]: Created slice kubepods-besteffort-pod38023ca4_5e28_46e0_8f9e_7096a637e252.slice - libcontainer container kubepods-besteffort-pod38023ca4_5e28_46e0_8f9e_7096a637e252.slice. Nov 13 11:57:45.138668 kubelet[2752]: I1113 11:57:45.138450 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/38023ca4-5e28-46e0-8f9e-7096a637e252-var-lib-calico\") pod \"tigera-operator-5645cfc98-nnqlk\" (UID: \"38023ca4-5e28-46e0-8f9e-7096a637e252\") " pod="tigera-operator/tigera-operator-5645cfc98-nnqlk" Nov 13 11:57:45.139079 kubelet[2752]: I1113 11:57:45.139001 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58tl\" (UniqueName: \"kubernetes.io/projected/38023ca4-5e28-46e0-8f9e-7096a637e252-kube-api-access-l58tl\") pod \"tigera-operator-5645cfc98-nnqlk\" (UID: \"38023ca4-5e28-46e0-8f9e-7096a637e252\") " pod="tigera-operator/tigera-operator-5645cfc98-nnqlk" Nov 13 11:57:45.178164 containerd[1515]: time="2024-11-13T11:57:45.178055518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85cvc,Uid:92f405c2-9230-47b6-b13b-9ef49d340fac,Namespace:kube-system,Attempt:0,}" Nov 13 11:57:45.210375 containerd[1515]: time="2024-11-13T11:57:45.210231518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:57:45.210375 containerd[1515]: time="2024-11-13T11:57:45.210330802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:57:45.210375 containerd[1515]: time="2024-11-13T11:57:45.210347773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:45.211308 containerd[1515]: time="2024-11-13T11:57:45.210466295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:45.248415 systemd[1]: Started cri-containerd-e0feab24fee7c77868342626b6c80d48e6ac8d31300632e79c6dd0f9ddfcb3ff.scope - libcontainer container e0feab24fee7c77868342626b6c80d48e6ac8d31300632e79c6dd0f9ddfcb3ff. Nov 13 11:57:45.291538 containerd[1515]: time="2024-11-13T11:57:45.291459839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85cvc,Uid:92f405c2-9230-47b6-b13b-9ef49d340fac,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0feab24fee7c77868342626b6c80d48e6ac8d31300632e79c6dd0f9ddfcb3ff\"" Nov 13 11:57:45.301259 containerd[1515]: time="2024-11-13T11:57:45.300996101Z" level=info msg="CreateContainer within sandbox \"e0feab24fee7c77868342626b6c80d48e6ac8d31300632e79c6dd0f9ddfcb3ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 13 11:57:45.313151 containerd[1515]: time="2024-11-13T11:57:45.313078057Z" level=info msg="CreateContainer within sandbox \"e0feab24fee7c77868342626b6c80d48e6ac8d31300632e79c6dd0f9ddfcb3ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d7e532c79ff91d42f0d924df54e3ce122e09cc24d3bdac9157fa61dbfafc3a8d\"" Nov 13 11:57:45.314399 containerd[1515]: time="2024-11-13T11:57:45.314367792Z" level=info msg="StartContainer for \"d7e532c79ff91d42f0d924df54e3ce122e09cc24d3bdac9157fa61dbfafc3a8d\"" Nov 13 11:57:45.347392 systemd[1]: Started cri-containerd-d7e532c79ff91d42f0d924df54e3ce122e09cc24d3bdac9157fa61dbfafc3a8d.scope - libcontainer container 
d7e532c79ff91d42f0d924df54e3ce122e09cc24d3bdac9157fa61dbfafc3a8d. Nov 13 11:57:45.380533 containerd[1515]: time="2024-11-13T11:57:45.380474244Z" level=info msg="StartContainer for \"d7e532c79ff91d42f0d924df54e3ce122e09cc24d3bdac9157fa61dbfafc3a8d\" returns successfully" Nov 13 11:57:45.401141 containerd[1515]: time="2024-11-13T11:57:45.401030158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-nnqlk,Uid:38023ca4-5e28-46e0-8f9e-7096a637e252,Namespace:tigera-operator,Attempt:0,}" Nov 13 11:57:45.433520 containerd[1515]: time="2024-11-13T11:57:45.433253035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:57:45.433520 containerd[1515]: time="2024-11-13T11:57:45.433364148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:57:45.433520 containerd[1515]: time="2024-11-13T11:57:45.433382751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:45.434035 containerd[1515]: time="2024-11-13T11:57:45.433500832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:45.455554 systemd[1]: Started cri-containerd-459d9aad693c15bfd4463de1fed7f6b13de3067ee9150e6c4714a1505ceae7c6.scope - libcontainer container 459d9aad693c15bfd4463de1fed7f6b13de3067ee9150e6c4714a1505ceae7c6. 
Nov 13 11:57:45.510405 containerd[1515]: time="2024-11-13T11:57:45.510357068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-nnqlk,Uid:38023ca4-5e28-46e0-8f9e-7096a637e252,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"459d9aad693c15bfd4463de1fed7f6b13de3067ee9150e6c4714a1505ceae7c6\"" Nov 13 11:57:45.518001 containerd[1515]: time="2024-11-13T11:57:45.517971004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 13 11:57:45.669283 kubelet[2752]: I1113 11:57:45.667392 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-85cvc" podStartSLOduration=1.667358942 podStartE2EDuration="1.667358942s" podCreationTimestamp="2024-11-13 11:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 11:57:45.666461015 +0000 UTC m=+17.286369713" watchObservedRunningTime="2024-11-13 11:57:45.667358942 +0000 UTC m=+17.287267643" Nov 13 11:57:47.688744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884746404.mount: Deactivated successfully. 
Nov 13 11:57:48.287427 containerd[1515]: time="2024-11-13T11:57:48.287376175Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:48.288932 containerd[1515]: time="2024-11-13T11:57:48.288875456Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763331" Nov 13 11:57:48.289625 containerd[1515]: time="2024-11-13T11:57:48.289585644Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:48.291510 containerd[1515]: time="2024-11-13T11:57:48.291470749Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:48.292414 containerd[1515]: time="2024-11-13T11:57:48.292288353Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 2.774239475s" Nov 13 11:57:48.292414 containerd[1515]: time="2024-11-13T11:57:48.292321941Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 13 11:57:48.295389 containerd[1515]: time="2024-11-13T11:57:48.295242672Z" level=info msg="CreateContainer within sandbox \"459d9aad693c15bfd4463de1fed7f6b13de3067ee9150e6c4714a1505ceae7c6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 13 11:57:48.308803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553440937.mount: Deactivated successfully. 
Nov 13 11:57:48.312080 containerd[1515]: time="2024-11-13T11:57:48.312044863Z" level=info msg="CreateContainer within sandbox \"459d9aad693c15bfd4463de1fed7f6b13de3067ee9150e6c4714a1505ceae7c6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"02eac10302fb2e701ba24b2349186209b6f6c234ba6c3184552f8554a3c58bb8\""
Nov 13 11:57:48.313287 containerd[1515]: time="2024-11-13T11:57:48.312659638Z" level=info msg="StartContainer for \"02eac10302fb2e701ba24b2349186209b6f6c234ba6c3184552f8554a3c58bb8\""
Nov 13 11:57:48.353399 systemd[1]: Started cri-containerd-02eac10302fb2e701ba24b2349186209b6f6c234ba6c3184552f8554a3c58bb8.scope - libcontainer container 02eac10302fb2e701ba24b2349186209b6f6c234ba6c3184552f8554a3c58bb8.
Nov 13 11:57:48.389344 containerd[1515]: time="2024-11-13T11:57:48.389306529Z" level=info msg="StartContainer for \"02eac10302fb2e701ba24b2349186209b6f6c234ba6c3184552f8554a3c58bb8\" returns successfully"
Nov 13 11:57:48.669036 kubelet[2752]: I1113 11:57:48.668472 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-nnqlk" podStartSLOduration=1.887907707 podStartE2EDuration="4.668413359s" podCreationTimestamp="2024-11-13 11:57:44 +0000 UTC" firstStartedPulling="2024-11-13 11:57:45.51283072 +0000 UTC m=+17.132739395" lastFinishedPulling="2024-11-13 11:57:48.293336369 +0000 UTC m=+19.913245047" observedRunningTime="2024-11-13 11:57:48.667622727 +0000 UTC m=+20.287531404" watchObservedRunningTime="2024-11-13 11:57:48.668413359 +0000 UTC m=+20.288322056"
Nov 13 11:57:51.626645 kubelet[2752]: I1113 11:57:51.626548 2752 topology_manager.go:215] "Topology Admit Handler" podUID="77a3ac83-6add-48c1-903c-e90868b112ef" podNamespace="calico-system" podName="calico-typha-76f5754ccb-8vzcq"
Nov 13 11:57:51.637291 systemd[1]: Created slice kubepods-besteffort-pod77a3ac83_6add_48c1_903c_e90868b112ef.slice - libcontainer container kubepods-besteffort-pod77a3ac83_6add_48c1_903c_e90868b112ef.slice.
Nov 13 11:57:51.682497 kubelet[2752]: I1113 11:57:51.682388 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77a3ac83-6add-48c1-903c-e90868b112ef-tigera-ca-bundle\") pod \"calico-typha-76f5754ccb-8vzcq\" (UID: \"77a3ac83-6add-48c1-903c-e90868b112ef\") " pod="calico-system/calico-typha-76f5754ccb-8vzcq"
Nov 13 11:57:51.682497 kubelet[2752]: I1113 11:57:51.682430 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/77a3ac83-6add-48c1-903c-e90868b112ef-typha-certs\") pod \"calico-typha-76f5754ccb-8vzcq\" (UID: \"77a3ac83-6add-48c1-903c-e90868b112ef\") " pod="calico-system/calico-typha-76f5754ccb-8vzcq"
Nov 13 11:57:51.682497 kubelet[2752]: I1113 11:57:51.682452 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6xw\" (UniqueName: \"kubernetes.io/projected/77a3ac83-6add-48c1-903c-e90868b112ef-kube-api-access-lc6xw\") pod \"calico-typha-76f5754ccb-8vzcq\" (UID: \"77a3ac83-6add-48c1-903c-e90868b112ef\") " pod="calico-system/calico-typha-76f5754ccb-8vzcq"
Nov 13 11:57:51.741963 kubelet[2752]: I1113 11:57:51.740946 2752 topology_manager.go:215] "Topology Admit Handler" podUID="7ef55bf0-6208-40c1-8547-b68d5dbdf7e9" podNamespace="calico-system" podName="calico-node-n49j7"
Nov 13 11:57:51.749290 systemd[1]: Created slice kubepods-besteffort-pod7ef55bf0_6208_40c1_8547_b68d5dbdf7e9.slice - libcontainer container kubepods-besteffort-pod7ef55bf0_6208_40c1_8547_b68d5dbdf7e9.slice.
Nov 13 11:57:51.783293 kubelet[2752]: I1113 11:57:51.783245 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-lib-modules\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.783559 kubelet[2752]: I1113 11:57:51.783538 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-xtables-lock\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.783690 kubelet[2752]: I1113 11:57:51.783647 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-policysync\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.783690 kubelet[2752]: I1113 11:57:51.783667 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-tigera-ca-bundle\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.783866 kubelet[2752]: I1113 11:57:51.783827 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-cni-log-dir\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.784088 kubelet[2752]: I1113 11:57:51.784048 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-cni-bin-dir\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.784285 kubelet[2752]: I1113 11:57:51.784070 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-cni-net-dir\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.785241 kubelet[2752]: I1113 11:57:51.784443 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-var-run-calico\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.785241 kubelet[2752]: I1113 11:57:51.784484 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-node-certs\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.785241 kubelet[2752]: I1113 11:57:51.784502 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94pg\" (UniqueName: \"kubernetes.io/projected/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-kube-api-access-m94pg\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.785241 kubelet[2752]: I1113 11:57:51.784534 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-var-lib-calico\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.785241 kubelet[2752]: I1113 11:57:51.784556 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ef55bf0-6208-40c1-8547-b68d5dbdf7e9-flexvol-driver-host\") pod \"calico-node-n49j7\" (UID: \"7ef55bf0-6208-40c1-8547-b68d5dbdf7e9\") " pod="calico-system/calico-node-n49j7"
Nov 13 11:57:51.866735 kubelet[2752]: I1113 11:57:51.866016 2752 topology_manager.go:215] "Topology Admit Handler" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" podNamespace="calico-system" podName="csi-node-driver-5tp6q"
Nov 13 11:57:51.868054 kubelet[2752]: E1113 11:57:51.867295 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826"
Nov 13 11:57:51.885681 kubelet[2752]: I1113 11:57:51.885509 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a73a079a-5c98-427b-b55a-3d27769f0826-registration-dir\") pod \"csi-node-driver-5tp6q\" (UID: \"a73a079a-5c98-427b-b55a-3d27769f0826\") " pod="calico-system/csi-node-driver-5tp6q"
Nov 13 11:57:51.885681 kubelet[2752]: I1113 11:57:51.885607 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a73a079a-5c98-427b-b55a-3d27769f0826-socket-dir\") pod \"csi-node-driver-5tp6q\" (UID: \"a73a079a-5c98-427b-b55a-3d27769f0826\") " pod="calico-system/csi-node-driver-5tp6q"
Nov 13 11:57:51.885681 kubelet[2752]: I1113 11:57:51.885626 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxb9\" (UniqueName: \"kubernetes.io/projected/a73a079a-5c98-427b-b55a-3d27769f0826-kube-api-access-dnxb9\") pod \"csi-node-driver-5tp6q\" (UID: \"a73a079a-5c98-427b-b55a-3d27769f0826\") " pod="calico-system/csi-node-driver-5tp6q"
Nov 13 11:57:51.885879 kubelet[2752]: I1113 11:57:51.885692 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a73a079a-5c98-427b-b55a-3d27769f0826-varrun\") pod \"csi-node-driver-5tp6q\" (UID: \"a73a079a-5c98-427b-b55a-3d27769f0826\") " pod="calico-system/csi-node-driver-5tp6q"
Nov 13 11:57:51.885879 kubelet[2752]: I1113 11:57:51.885720 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a73a079a-5c98-427b-b55a-3d27769f0826-kubelet-dir\") pod \"csi-node-driver-5tp6q\" (UID: \"a73a079a-5c98-427b-b55a-3d27769f0826\") " pod="calico-system/csi-node-driver-5tp6q"
Nov 13 11:57:51.890645 kubelet[2752]: E1113 11:57:51.890057 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.890645 kubelet[2752]: W1113 11:57:51.890089 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.890645 kubelet[2752]: E1113 11:57:51.890131 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.891445 kubelet[2752]: E1113 11:57:51.890611 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.891445 kubelet[2752]: W1113 11:57:51.891247 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.891445 kubelet[2752]: E1113 11:57:51.891262 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.892286 kubelet[2752]: E1113 11:57:51.891730 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.892286 kubelet[2752]: W1113 11:57:51.891742 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.892286 kubelet[2752]: E1113 11:57:51.891753 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.896531 kubelet[2752]: E1113 11:57:51.896414 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.896531 kubelet[2752]: W1113 11:57:51.896433 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.896531 kubelet[2752]: E1113 11:57:51.896462 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.897013 kubelet[2752]: E1113 11:57:51.896915 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.897013 kubelet[2752]: W1113 11:57:51.896927 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.897013 kubelet[2752]: E1113 11:57:51.896941 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.897355 kubelet[2752]: E1113 11:57:51.897261 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.897355 kubelet[2752]: W1113 11:57:51.897283 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.897355 kubelet[2752]: E1113 11:57:51.897293 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.897705 kubelet[2752]: E1113 11:57:51.897644 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.897705 kubelet[2752]: W1113 11:57:51.897655 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.897705 kubelet[2752]: E1113 11:57:51.897666 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.898075 kubelet[2752]: E1113 11:57:51.897971 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.898075 kubelet[2752]: W1113 11:57:51.897991 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.898075 kubelet[2752]: E1113 11:57:51.898007 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.900519 kubelet[2752]: E1113 11:57:51.900397 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.900519 kubelet[2752]: W1113 11:57:51.900412 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.900519 kubelet[2752]: E1113 11:57:51.900442 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.903023 kubelet[2752]: E1113 11:57:51.903008 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.903245 kubelet[2752]: W1113 11:57:51.903046 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.903245 kubelet[2752]: E1113 11:57:51.903083 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.903456 kubelet[2752]: E1113 11:57:51.903445 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.903593 kubelet[2752]: W1113 11:57:51.903492 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.904214 kubelet[2752]: E1113 11:57:51.903648 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.904524 kubelet[2752]: E1113 11:57:51.904454 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.904524 kubelet[2752]: W1113 11:57:51.904467 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.904791 kubelet[2752]: E1113 11:57:51.904703 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.905478 kubelet[2752]: E1113 11:57:51.905464 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.905761 kubelet[2752]: W1113 11:57:51.905504 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.906724 kubelet[2752]: E1113 11:57:51.906488 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.906997 kubelet[2752]: E1113 11:57:51.906843 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.906997 kubelet[2752]: W1113 11:57:51.906856 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.906997 kubelet[2752]: E1113 11:57:51.906978 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.908457 kubelet[2752]: E1113 11:57:51.908159 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.908457 kubelet[2752]: W1113 11:57:51.908174 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.908457 kubelet[2752]: E1113 11:57:51.908374 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.908457 kubelet[2752]: W1113 11:57:51.908382 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.909948 kubelet[2752]: E1113 11:57:51.908988 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.909948 kubelet[2752]: E1113 11:57:51.909010 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.909948 kubelet[2752]: E1113 11:57:51.909121 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.909948 kubelet[2752]: W1113 11:57:51.909128 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.909948 kubelet[2752]: E1113 11:57:51.909150 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.911363 kubelet[2752]: E1113 11:57:51.910449 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.911363 kubelet[2752]: W1113 11:57:51.910459 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.911363 kubelet[2752]: E1113 11:57:51.910485 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.911363 kubelet[2752]: E1113 11:57:51.910697 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.911363 kubelet[2752]: W1113 11:57:51.910705 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.911363 kubelet[2752]: E1113 11:57:51.910722 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.911363 kubelet[2752]: E1113 11:57:51.910898 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.911363 kubelet[2752]: W1113 11:57:51.910906 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.911363 kubelet[2752]: E1113 11:57:51.910984 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.911363 kubelet[2752]: E1113 11:57:51.911135 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.911662 kubelet[2752]: W1113 11:57:51.911142 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.911662 kubelet[2752]: E1113 11:57:51.911151 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.911662 kubelet[2752]: E1113 11:57:51.911322 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.911662 kubelet[2752]: W1113 11:57:51.911329 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.911662 kubelet[2752]: E1113 11:57:51.911338 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.913076 kubelet[2752]: E1113 11:57:51.912378 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.913076 kubelet[2752]: W1113 11:57:51.912393 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.913076 kubelet[2752]: E1113 11:57:51.912405 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.913076 kubelet[2752]: E1113 11:57:51.912585 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.913076 kubelet[2752]: W1113 11:57:51.912592 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.913076 kubelet[2752]: E1113 11:57:51.912601 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.922922 kubelet[2752]: E1113 11:57:51.922844 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.922922 kubelet[2752]: W1113 11:57:51.922863 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.922922 kubelet[2752]: E1113 11:57:51.922882 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.943304 containerd[1515]: time="2024-11-13T11:57:51.942501560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76f5754ccb-8vzcq,Uid:77a3ac83-6add-48c1-903c-e90868b112ef,Namespace:calico-system,Attempt:0,}"
Nov 13 11:57:51.988568 kubelet[2752]: E1113 11:57:51.988401 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.988568 kubelet[2752]: W1113 11:57:51.988432 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.988568 kubelet[2752]: E1113 11:57:51.988453 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.989102 kubelet[2752]: E1113 11:57:51.988692 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.989102 kubelet[2752]: W1113 11:57:51.988701 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.989102 kubelet[2752]: E1113 11:57:51.988716 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.989102 kubelet[2752]: E1113 11:57:51.988905 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.989102 kubelet[2752]: W1113 11:57:51.988912 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.989102 kubelet[2752]: E1113 11:57:51.988926 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.991495 kubelet[2752]: E1113 11:57:51.991386 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.991495 kubelet[2752]: W1113 11:57:51.991400 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.991495 kubelet[2752]: E1113 11:57:51.991430 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.991664 kubelet[2752]: E1113 11:57:51.991654 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.991714 kubelet[2752]: W1113 11:57:51.991706 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.991780 kubelet[2752]: E1113 11:57:51.991765 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.991973 kubelet[2752]: E1113 11:57:51.991964 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.992031 kubelet[2752]: W1113 11:57:51.992023 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.992147 kubelet[2752]: E1113 11:57:51.992085 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.992343 kubelet[2752]: E1113 11:57:51.992333 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.992401 kubelet[2752]: W1113 11:57:51.992393 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.992475 kubelet[2752]: E1113 11:57:51.992450 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.992689 kubelet[2752]: E1113 11:57:51.992673 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.992731 kubelet[2752]: W1113 11:57:51.992688 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.992731 kubelet[2752]: E1113 11:57:51.992706 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.993371 kubelet[2752]: E1113 11:57:51.993355 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.993371 kubelet[2752]: W1113 11:57:51.993369 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.993604 kubelet[2752]: E1113 11:57:51.993580 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.995375 kubelet[2752]: E1113 11:57:51.995359 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.995375 kubelet[2752]: W1113 11:57:51.995373 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.995465 kubelet[2752]: E1113 11:57:51.995455 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.995582 kubelet[2752]: E1113 11:57:51.995573 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.995616 kubelet[2752]: W1113 11:57:51.995582 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.995668 kubelet[2752]: E1113 11:57:51.995657 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.995783 kubelet[2752]: E1113 11:57:51.995773 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.995783 kubelet[2752]: W1113 11:57:51.995782 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.995886 kubelet[2752]: E1113 11:57:51.995862 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.995943 kubelet[2752]: E1113 11:57:51.995934 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.995986 kubelet[2752]: W1113 11:57:51.995943 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.996027 kubelet[2752]: E1113 11:57:51.996016 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.996126 kubelet[2752]: E1113 11:57:51.996117 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.996157 kubelet[2752]: W1113 11:57:51.996126 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.996269 kubelet[2752]: E1113 11:57:51.996244 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.996320 kubelet[2752]: E1113 11:57:51.996299 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.996320 kubelet[2752]: W1113 11:57:51.996307 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.996388 kubelet[2752]: E1113 11:57:51.996325 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.996502 kubelet[2752]: E1113 11:57:51.996491 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.996541 kubelet[2752]: W1113 11:57:51.996502 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.996541 kubelet[2752]: E1113 11:57:51.996514 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 13 11:57:51.997305 kubelet[2752]: E1113 11:57:51.997292 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 13 11:57:51.997344 kubelet[2752]: W1113 11:57:51.997305 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 13 11:57:51.997344 kubelet[2752]: E1113 11:57:51.997321 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 11:57:51.997519 kubelet[2752]: E1113 11:57:51.997508 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:51.997550 kubelet[2752]: W1113 11:57:51.997518 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:51.997550 kubelet[2752]: E1113 11:57:51.997538 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 11:57:51.999059 kubelet[2752]: E1113 11:57:51.999044 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:51.999059 kubelet[2752]: W1113 11:57:51.999058 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:51.999153 kubelet[2752]: E1113 11:57:51.999140 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 11:57:51.999314 kubelet[2752]: E1113 11:57:51.999303 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:51.999428 kubelet[2752]: W1113 11:57:51.999313 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:51.999428 kubelet[2752]: E1113 11:57:51.999335 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 11:57:51.999496 kubelet[2752]: E1113 11:57:51.999459 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:51.999496 kubelet[2752]: W1113 11:57:51.999465 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:51.999769 kubelet[2752]: E1113 11:57:51.999652 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:51.999769 kubelet[2752]: W1113 11:57:51.999662 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:51.999769 kubelet[2752]: E1113 11:57:51.999671 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 11:57:51.999769 kubelet[2752]: E1113 11:57:51.999742 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 11:57:52.004779 kubelet[2752]: E1113 11:57:52.004762 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:52.004779 kubelet[2752]: W1113 11:57:52.004778 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:52.004869 kubelet[2752]: E1113 11:57:52.004793 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 11:57:52.005021 kubelet[2752]: E1113 11:57:52.005011 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:52.005065 kubelet[2752]: W1113 11:57:52.005021 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:52.005065 kubelet[2752]: E1113 11:57:52.005043 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 13 11:57:52.005437 kubelet[2752]: E1113 11:57:52.005425 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:52.005480 kubelet[2752]: W1113 11:57:52.005436 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:52.005480 kubelet[2752]: E1113 11:57:52.005447 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 11:57:52.022313 containerd[1515]: time="2024-11-13T11:57:52.021957114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:57:52.022313 containerd[1515]: time="2024-11-13T11:57:52.022045027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:57:52.022313 containerd[1515]: time="2024-11-13T11:57:52.022087083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:52.022313 containerd[1515]: time="2024-11-13T11:57:52.022258417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:52.037223 kubelet[2752]: E1113 11:57:52.037174 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 13 11:57:52.037223 kubelet[2752]: W1113 11:57:52.037214 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 13 11:57:52.037386 kubelet[2752]: E1113 11:57:52.037236 2752 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 13 11:57:52.051447 systemd[1]: Started cri-containerd-f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12.scope - libcontainer container f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12. Nov 13 11:57:52.054166 containerd[1515]: time="2024-11-13T11:57:52.054108091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n49j7,Uid:7ef55bf0-6208-40c1-8547-b68d5dbdf7e9,Namespace:calico-system,Attempt:0,}" Nov 13 11:57:52.082483 containerd[1515]: time="2024-11-13T11:57:52.081656232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:57:52.082925 containerd[1515]: time="2024-11-13T11:57:52.082811783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:57:52.082925 containerd[1515]: time="2024-11-13T11:57:52.082833493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:52.083252 containerd[1515]: time="2024-11-13T11:57:52.083121197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:57:52.106377 systemd[1]: Started cri-containerd-fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a.scope - libcontainer container fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a. Nov 13 11:57:52.186131 containerd[1515]: time="2024-11-13T11:57:52.186092267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n49j7,Uid:7ef55bf0-6208-40c1-8547-b68d5dbdf7e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a\"" Nov 13 11:57:52.189745 containerd[1515]: time="2024-11-13T11:57:52.189712803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 13 11:57:52.258180 containerd[1515]: time="2024-11-13T11:57:52.257929794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76f5754ccb-8vzcq,Uid:77a3ac83-6add-48c1-903c-e90868b112ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\"" Nov 13 11:57:53.381236 sshd[2832]: Connection closed by 206.168.34.62 port 35390 [preauth] Nov 13 11:57:53.383400 systemd[1]: sshd@10-10.244.96.58:22-206.168.34.62:35390.service: Deactivated successfully. 
Nov 13 11:57:53.537444 kubelet[2752]: E1113 11:57:53.535885 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" Nov 13 11:57:53.926751 containerd[1515]: time="2024-11-13T11:57:53.926703757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:53.928765 containerd[1515]: time="2024-11-13T11:57:53.928684918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 13 11:57:53.935850 containerd[1515]: time="2024-11-13T11:57:53.931796070Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:53.937846 containerd[1515]: time="2024-11-13T11:57:53.937811726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.748062696s" Nov 13 11:57:53.938100 containerd[1515]: time="2024-11-13T11:57:53.937970819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 13 11:57:53.938100 containerd[1515]: time="2024-11-13T11:57:53.938063217Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:53.940990 containerd[1515]: time="2024-11-13T11:57:53.940409838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 13 11:57:53.945451 containerd[1515]: time="2024-11-13T11:57:53.945388963Z" level=info msg="CreateContainer within sandbox \"fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 13 11:57:53.959858 containerd[1515]: time="2024-11-13T11:57:53.959637875Z" level=info msg="CreateContainer within sandbox \"fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8\"" Nov 13 11:57:53.961274 containerd[1515]: time="2024-11-13T11:57:53.961242549Z" level=info msg="StartContainer for \"5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8\"" Nov 13 11:57:54.011411 systemd[1]: Started cri-containerd-5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8.scope - libcontainer container 5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8. Nov 13 11:57:54.054724 containerd[1515]: time="2024-11-13T11:57:54.054673594Z" level=info msg="StartContainer for \"5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8\" returns successfully" Nov 13 11:57:54.071779 systemd[1]: cri-containerd-5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8.scope: Deactivated successfully. Nov 13 11:57:54.119304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8-rootfs.mount: Deactivated successfully. 
Nov 13 11:57:54.153694 containerd[1515]: time="2024-11-13T11:57:54.121496782Z" level=info msg="shim disconnected" id=5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8 namespace=k8s.io Nov 13 11:57:54.153694 containerd[1515]: time="2024-11-13T11:57:54.153511292Z" level=warning msg="cleaning up after shim disconnected" id=5162a7b0e5744a61d071904781acbb928fa53090f23dee457c9c6383cdce84b8 namespace=k8s.io Nov 13 11:57:54.153694 containerd[1515]: time="2024-11-13T11:57:54.153527137Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 11:57:55.536724 kubelet[2752]: E1113 11:57:55.535420 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" Nov 13 11:57:56.436222 containerd[1515]: time="2024-11-13T11:57:56.434932943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:56.436222 containerd[1515]: time="2024-11-13T11:57:56.435494802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 13 11:57:56.436222 containerd[1515]: time="2024-11-13T11:57:56.435690908Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:56.438092 containerd[1515]: time="2024-11-13T11:57:56.438063240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:57:56.438944 containerd[1515]: time="2024-11-13T11:57:56.438917057Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.498476583s" Nov 13 11:57:56.439069 containerd[1515]: time="2024-11-13T11:57:56.439052001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 13 11:57:56.440692 containerd[1515]: time="2024-11-13T11:57:56.440554687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 13 11:57:56.472088 containerd[1515]: time="2024-11-13T11:57:56.472038076Z" level=info msg="CreateContainer within sandbox \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 13 11:57:56.491608 containerd[1515]: time="2024-11-13T11:57:56.491551690Z" level=info msg="CreateContainer within sandbox \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\"" Nov 13 11:57:56.493126 containerd[1515]: time="2024-11-13T11:57:56.493083199Z" level=info msg="StartContainer for \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\"" Nov 13 11:57:56.535636 systemd[1]: Started cri-containerd-92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49.scope - libcontainer container 92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49. 
Nov 13 11:57:56.606938 containerd[1515]: time="2024-11-13T11:57:56.606810209Z" level=info msg="StartContainer for \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\" returns successfully" Nov 13 11:57:57.536358 kubelet[2752]: E1113 11:57:57.536171 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" Nov 13 11:57:57.710968 kubelet[2752]: I1113 11:57:57.710756 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76f5754ccb-8vzcq" podStartSLOduration=2.534901 podStartE2EDuration="6.710687096s" podCreationTimestamp="2024-11-13 11:57:51 +0000 UTC" firstStartedPulling="2024-11-13 11:57:52.264103671 +0000 UTC m=+23.884012346" lastFinishedPulling="2024-11-13 11:57:56.439889768 +0000 UTC m=+28.059798442" observedRunningTime="2024-11-13 11:57:56.708326835 +0000 UTC m=+28.328235527" watchObservedRunningTime="2024-11-13 11:57:57.710687096 +0000 UTC m=+29.330595877" Nov 13 11:57:59.537148 kubelet[2752]: E1113 11:57:59.535384 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" Nov 13 11:58:01.536144 kubelet[2752]: E1113 11:58:01.535895 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" Nov 13 11:58:01.553963 
containerd[1515]: time="2024-11-13T11:58:01.553722202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:01.555711 containerd[1515]: time="2024-11-13T11:58:01.554938712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 13 11:58:01.555711 containerd[1515]: time="2024-11-13T11:58:01.555142303Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:01.558417 containerd[1515]: time="2024-11-13T11:58:01.558373119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:01.559381 containerd[1515]: time="2024-11-13T11:58:01.559355541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 5.118770279s" Nov 13 11:58:01.559463 containerd[1515]: time="2024-11-13T11:58:01.559385196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 13 11:58:01.563382 containerd[1515]: time="2024-11-13T11:58:01.563340506Z" level=info msg="CreateContainer within sandbox \"fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 13 11:58:01.585761 containerd[1515]: time="2024-11-13T11:58:01.585718978Z" level=info msg="CreateContainer within sandbox 
\"fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579\"" Nov 13 11:58:01.586456 containerd[1515]: time="2024-11-13T11:58:01.586431268Z" level=info msg="StartContainer for \"ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579\"" Nov 13 11:58:01.657388 systemd[1]: Started cri-containerd-ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579.scope - libcontainer container ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579. Nov 13 11:58:01.711498 containerd[1515]: time="2024-11-13T11:58:01.711449579Z" level=info msg="StartContainer for \"ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579\" returns successfully" Nov 13 11:58:02.376948 systemd[1]: cri-containerd-ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579.scope: Deactivated successfully. Nov 13 11:58:02.410730 kubelet[2752]: I1113 11:58:02.409723 2752 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 13 11:58:02.410042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579-rootfs.mount: Deactivated successfully. 
Nov 13 11:58:02.461390 containerd[1515]: time="2024-11-13T11:58:02.461052150Z" level=info msg="shim disconnected" id=ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579 namespace=k8s.io Nov 13 11:58:02.465276 containerd[1515]: time="2024-11-13T11:58:02.462444522Z" level=warning msg="cleaning up after shim disconnected" id=ea85023723f1a49a54dd8fab506b1743891774722dbc3f6e045d6756c0406579 namespace=k8s.io Nov 13 11:58:02.465276 containerd[1515]: time="2024-11-13T11:58:02.463439670Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 11:58:02.477321 kubelet[2752]: I1113 11:58:02.474805 2752 topology_manager.go:215] "Topology Admit Handler" podUID="f284fb92-56b8-452d-85a1-fc02bb9810b6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gjw7n" Nov 13 11:58:02.482006 kubelet[2752]: I1113 11:58:02.481972 2752 topology_manager.go:215] "Topology Admit Handler" podUID="3cfa244a-99c4-4135-8a9f-f544de1085c6" podNamespace="calico-apiserver" podName="calico-apiserver-8588545cd8-kdzjc" Nov 13 11:58:02.485229 kubelet[2752]: I1113 11:58:02.484738 2752 topology_manager.go:215] "Topology Admit Handler" podUID="c2089e4d-0914-4208-bd3f-ebfa5baa6636" podNamespace="kube-system" podName="coredns-7db6d8ff4d-29ddh" Nov 13 11:58:02.489246 kubelet[2752]: I1113 11:58:02.486600 2752 topology_manager.go:215] "Topology Admit Handler" podUID="b9eb2b70-ede6-4290-8483-7657e4c96a8b" podNamespace="calico-system" podName="calico-kube-controllers-7f4895d8cb-7gh4j" Nov 13 11:58:02.498742 kubelet[2752]: I1113 11:58:02.498525 2752 topology_manager.go:215] "Topology Admit Handler" podUID="f8b9f6f4-7488-43e0-87ea-b08a68022038" podNamespace="calico-apiserver" podName="calico-apiserver-8588545cd8-hgc4l" Nov 13 11:58:02.504591 kubelet[2752]: W1113 11:58:02.504256 2752 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-gr2mf.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the 
namespace "kube-system": no relationship found between node 'srv-gr2mf.gb1.brightbox.com' and this object Nov 13 11:58:02.504591 kubelet[2752]: E1113 11:58:02.504472 2752 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-gr2mf.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-gr2mf.gb1.brightbox.com' and this object Nov 13 11:58:02.510209 systemd[1]: Created slice kubepods-besteffort-pod3cfa244a_99c4_4135_8a9f_f544de1085c6.slice - libcontainer container kubepods-besteffort-pod3cfa244a_99c4_4135_8a9f_f544de1085c6.slice. Nov 13 11:58:02.519235 systemd[1]: Created slice kubepods-burstable-podf284fb92_56b8_452d_85a1_fc02bb9810b6.slice - libcontainer container kubepods-burstable-podf284fb92_56b8_452d_85a1_fc02bb9810b6.slice. Nov 13 11:58:02.525847 systemd[1]: Created slice kubepods-burstable-podc2089e4d_0914_4208_bd3f_ebfa5baa6636.slice - libcontainer container kubepods-burstable-podc2089e4d_0914_4208_bd3f_ebfa5baa6636.slice. Nov 13 11:58:02.530727 systemd[1]: Created slice kubepods-besteffort-podb9eb2b70_ede6_4290_8483_7657e4c96a8b.slice - libcontainer container kubepods-besteffort-podb9eb2b70_ede6_4290_8483_7657e4c96a8b.slice. Nov 13 11:58:02.542248 systemd[1]: Created slice kubepods-besteffort-podf8b9f6f4_7488_43e0_87ea_b08a68022038.slice - libcontainer container kubepods-besteffort-podf8b9f6f4_7488_43e0_87ea_b08a68022038.slice. 
Nov 13 11:58:02.578416 kubelet[2752]: I1113 11:58:02.578360 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f729b\" (UniqueName: \"kubernetes.io/projected/3cfa244a-99c4-4135-8a9f-f544de1085c6-kube-api-access-f729b\") pod \"calico-apiserver-8588545cd8-kdzjc\" (UID: \"3cfa244a-99c4-4135-8a9f-f544de1085c6\") " pod="calico-apiserver/calico-apiserver-8588545cd8-kdzjc" Nov 13 11:58:02.580323 kubelet[2752]: I1113 11:58:02.579253 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cfa244a-99c4-4135-8a9f-f544de1085c6-calico-apiserver-certs\") pod \"calico-apiserver-8588545cd8-kdzjc\" (UID: \"3cfa244a-99c4-4135-8a9f-f544de1085c6\") " pod="calico-apiserver/calico-apiserver-8588545cd8-kdzjc" Nov 13 11:58:02.580323 kubelet[2752]: I1113 11:58:02.579317 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2089e4d-0914-4208-bd3f-ebfa5baa6636-config-volume\") pod \"coredns-7db6d8ff4d-29ddh\" (UID: \"c2089e4d-0914-4208-bd3f-ebfa5baa6636\") " pod="kube-system/coredns-7db6d8ff4d-29ddh" Nov 13 11:58:02.580323 kubelet[2752]: I1113 11:58:02.579362 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmst\" (UniqueName: \"kubernetes.io/projected/b9eb2b70-ede6-4290-8483-7657e4c96a8b-kube-api-access-6mmst\") pod \"calico-kube-controllers-7f4895d8cb-7gh4j\" (UID: \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\") " pod="calico-system/calico-kube-controllers-7f4895d8cb-7gh4j" Nov 13 11:58:02.580323 kubelet[2752]: I1113 11:58:02.579416 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9eb2b70-ede6-4290-8483-7657e4c96a8b-tigera-ca-bundle\") pod 
\"calico-kube-controllers-7f4895d8cb-7gh4j\" (UID: \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\") " pod="calico-system/calico-kube-controllers-7f4895d8cb-7gh4j" Nov 13 11:58:02.580323 kubelet[2752]: I1113 11:58:02.579451 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtgf\" (UniqueName: \"kubernetes.io/projected/f284fb92-56b8-452d-85a1-fc02bb9810b6-kube-api-access-7vtgf\") pod \"coredns-7db6d8ff4d-gjw7n\" (UID: \"f284fb92-56b8-452d-85a1-fc02bb9810b6\") " pod="kube-system/coredns-7db6d8ff4d-gjw7n" Nov 13 11:58:02.580582 kubelet[2752]: I1113 11:58:02.579482 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcg2\" (UniqueName: \"kubernetes.io/projected/f8b9f6f4-7488-43e0-87ea-b08a68022038-kube-api-access-2xcg2\") pod \"calico-apiserver-8588545cd8-hgc4l\" (UID: \"f8b9f6f4-7488-43e0-87ea-b08a68022038\") " pod="calico-apiserver/calico-apiserver-8588545cd8-hgc4l" Nov 13 11:58:02.580582 kubelet[2752]: I1113 11:58:02.579514 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhmb5\" (UniqueName: \"kubernetes.io/projected/c2089e4d-0914-4208-bd3f-ebfa5baa6636-kube-api-access-vhmb5\") pod \"coredns-7db6d8ff4d-29ddh\" (UID: \"c2089e4d-0914-4208-bd3f-ebfa5baa6636\") " pod="kube-system/coredns-7db6d8ff4d-29ddh" Nov 13 11:58:02.580582 kubelet[2752]: I1113 11:58:02.579561 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f284fb92-56b8-452d-85a1-fc02bb9810b6-config-volume\") pod \"coredns-7db6d8ff4d-gjw7n\" (UID: \"f284fb92-56b8-452d-85a1-fc02bb9810b6\") " pod="kube-system/coredns-7db6d8ff4d-gjw7n" Nov 13 11:58:02.580582 kubelet[2752]: I1113 11:58:02.579594 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8b9f6f4-7488-43e0-87ea-b08a68022038-calico-apiserver-certs\") pod \"calico-apiserver-8588545cd8-hgc4l\" (UID: \"f8b9f6f4-7488-43e0-87ea-b08a68022038\") " pod="calico-apiserver/calico-apiserver-8588545cd8-hgc4l" Nov 13 11:58:02.736260 containerd[1515]: time="2024-11-13T11:58:02.735329661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 13 11:58:02.818270 containerd[1515]: time="2024-11-13T11:58:02.817228034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-kdzjc,Uid:3cfa244a-99c4-4135-8a9f-f544de1085c6,Namespace:calico-apiserver,Attempt:0,}" Nov 13 11:58:02.837011 containerd[1515]: time="2024-11-13T11:58:02.836958582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4895d8cb-7gh4j,Uid:b9eb2b70-ede6-4290-8483-7657e4c96a8b,Namespace:calico-system,Attempt:0,}" Nov 13 11:58:02.864146 containerd[1515]: time="2024-11-13T11:58:02.864096346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-hgc4l,Uid:f8b9f6f4-7488-43e0-87ea-b08a68022038,Namespace:calico-apiserver,Attempt:0,}" Nov 13 11:58:03.016014 containerd[1515]: time="2024-11-13T11:58:03.015507819Z" level=error msg="Failed to destroy network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.022724 containerd[1515]: time="2024-11-13T11:58:03.022666569Z" level=error msg="encountered an error cleaning up failed sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 13 11:58:03.022886 containerd[1515]: time="2024-11-13T11:58:03.022778060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4895d8cb-7gh4j,Uid:b9eb2b70-ede6-4290-8483-7657e4c96a8b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.024475 containerd[1515]: time="2024-11-13T11:58:03.024430472Z" level=error msg="Failed to destroy network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.025982 containerd[1515]: time="2024-11-13T11:58:03.025946369Z" level=error msg="encountered an error cleaning up failed sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.026099 containerd[1515]: time="2024-11-13T11:58:03.026012579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-kdzjc,Uid:3cfa244a-99c4-4135-8a9f-f544de1085c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.030128 
kubelet[2752]: E1113 11:58:03.028774 2752 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.030128 kubelet[2752]: E1113 11:58:03.028867 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8588545cd8-kdzjc" Nov 13 11:58:03.030128 kubelet[2752]: E1113 11:58:03.028903 2752 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8588545cd8-kdzjc" Nov 13 11:58:03.031322 kubelet[2752]: E1113 11:58:03.028996 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8588545cd8-kdzjc_calico-apiserver(3cfa244a-99c4-4135-8a9f-f544de1085c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8588545cd8-kdzjc_calico-apiserver(3cfa244a-99c4-4135-8a9f-f544de1085c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8588545cd8-kdzjc" podUID="3cfa244a-99c4-4135-8a9f-f544de1085c6" Nov 13 11:58:03.031322 kubelet[2752]: E1113 11:58:03.029919 2752 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.031322 kubelet[2752]: E1113 11:58:03.029965 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f4895d8cb-7gh4j" Nov 13 11:58:03.031483 kubelet[2752]: E1113 11:58:03.029987 2752 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f4895d8cb-7gh4j" Nov 13 11:58:03.031483 kubelet[2752]: E1113 11:58:03.030024 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7f4895d8cb-7gh4j_calico-system(b9eb2b70-ede6-4290-8483-7657e4c96a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f4895d8cb-7gh4j_calico-system(b9eb2b70-ede6-4290-8483-7657e4c96a8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f4895d8cb-7gh4j" podUID="b9eb2b70-ede6-4290-8483-7657e4c96a8b" Nov 13 11:58:03.041517 containerd[1515]: time="2024-11-13T11:58:03.041459875Z" level=error msg="Failed to destroy network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.041911 containerd[1515]: time="2024-11-13T11:58:03.041883574Z" level=error msg="encountered an error cleaning up failed sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.041978 containerd[1515]: time="2024-11-13T11:58:03.041953532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-hgc4l,Uid:f8b9f6f4-7488-43e0-87ea-b08a68022038,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.042523 kubelet[2752]: E1113 11:58:03.042162 2752 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.042523 kubelet[2752]: E1113 11:58:03.042233 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8588545cd8-hgc4l" Nov 13 11:58:03.042523 kubelet[2752]: E1113 11:58:03.042265 2752 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8588545cd8-hgc4l" Nov 13 11:58:03.042708 kubelet[2752]: E1113 11:58:03.042316 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8588545cd8-hgc4l_calico-apiserver(f8b9f6f4-7488-43e0-87ea-b08a68022038)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8588545cd8-hgc4l_calico-apiserver(f8b9f6f4-7488-43e0-87ea-b08a68022038)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8588545cd8-hgc4l" podUID="f8b9f6f4-7488-43e0-87ea-b08a68022038" Nov 13 11:58:03.542844 systemd[1]: Created slice kubepods-besteffort-poda73a079a_5c98_427b_b55a_3d27769f0826.slice - libcontainer container kubepods-besteffort-poda73a079a_5c98_427b_b55a_3d27769f0826.slice. Nov 13 11:58:03.545753 containerd[1515]: time="2024-11-13T11:58:03.545570860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5tp6q,Uid:a73a079a-5c98-427b-b55a-3d27769f0826,Namespace:calico-system,Attempt:0,}" Nov 13 11:58:03.605357 containerd[1515]: time="2024-11-13T11:58:03.605293441Z" level=error msg="Failed to destroy network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.605672 containerd[1515]: time="2024-11-13T11:58:03.605640284Z" level=error msg="encountered an error cleaning up failed sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.605806 containerd[1515]: time="2024-11-13T11:58:03.605693818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5tp6q,Uid:a73a079a-5c98-427b-b55a-3d27769f0826,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.605987 kubelet[2752]: E1113 11:58:03.605949 2752 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.606669 kubelet[2752]: E1113 11:58:03.606005 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5tp6q" Nov 13 11:58:03.606669 kubelet[2752]: E1113 11:58:03.606045 2752 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5tp6q" Nov 13 11:58:03.606669 kubelet[2752]: E1113 11:58:03.606131 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5tp6q_calico-system(a73a079a-5c98-427b-b55a-3d27769f0826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-5tp6q_calico-system(a73a079a-5c98-427b-b55a-3d27769f0826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" Nov 13 11:58:03.683051 kubelet[2752]: E1113 11:58:03.682876 2752 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 13 11:58:03.683319 kubelet[2752]: E1113 11:58:03.683121 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2089e4d-0914-4208-bd3f-ebfa5baa6636-config-volume podName:c2089e4d-0914-4208-bd3f-ebfa5baa6636 nodeName:}" failed. No retries permitted until 2024-11-13 11:58:04.18304587 +0000 UTC m=+35.802954593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c2089e4d-0914-4208-bd3f-ebfa5baa6636-config-volume") pod "coredns-7db6d8ff4d-29ddh" (UID: "c2089e4d-0914-4208-bd3f-ebfa5baa6636") : failed to sync configmap cache: timed out waiting for the condition Nov 13 11:58:03.685210 kubelet[2752]: E1113 11:58:03.685123 2752 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 13 11:58:03.685409 kubelet[2752]: E1113 11:58:03.685274 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f284fb92-56b8-452d-85a1-fc02bb9810b6-config-volume podName:f284fb92-56b8-452d-85a1-fc02bb9810b6 nodeName:}" failed. No retries permitted until 2024-11-13 11:58:04.185237724 +0000 UTC m=+35.805146487 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f284fb92-56b8-452d-85a1-fc02bb9810b6-config-volume") pod "coredns-7db6d8ff4d-gjw7n" (UID: "f284fb92-56b8-452d-85a1-fc02bb9810b6") : failed to sync configmap cache: timed out waiting for the condition Nov 13 11:58:03.736246 kubelet[2752]: I1113 11:58:03.735712 2752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:03.742120 kubelet[2752]: I1113 11:58:03.740989 2752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:03.746111 containerd[1515]: time="2024-11-13T11:58:03.745809657Z" level=info msg="StopPodSandbox for \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\"" Nov 13 11:58:03.748261 containerd[1515]: time="2024-11-13T11:58:03.747894326Z" level=info msg="StopPodSandbox for \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\"" Nov 13 11:58:03.749133 containerd[1515]: time="2024-11-13T11:58:03.748468391Z" level=info msg="Ensure that sandbox daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e in task-service has been cleanup successfully" Nov 13 11:58:03.750325 containerd[1515]: time="2024-11-13T11:58:03.750278058Z" level=info msg="Ensure that sandbox 006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d in task-service has been cleanup successfully" Nov 13 11:58:03.753887 kubelet[2752]: I1113 11:58:03.753865 2752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:03.755384 containerd[1515]: time="2024-11-13T11:58:03.755357518Z" level=info msg="StopPodSandbox for \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\"" Nov 13 11:58:03.756235 containerd[1515]: 
time="2024-11-13T11:58:03.756210497Z" level=info msg="Ensure that sandbox 064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7 in task-service has been cleanup successfully" Nov 13 11:58:03.756889 kubelet[2752]: I1113 11:58:03.756827 2752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:03.757830 containerd[1515]: time="2024-11-13T11:58:03.757716838Z" level=info msg="StopPodSandbox for \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\"" Nov 13 11:58:03.758087 containerd[1515]: time="2024-11-13T11:58:03.758068611Z" level=info msg="Ensure that sandbox 188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e in task-service has been cleanup successfully" Nov 13 11:58:03.825656 containerd[1515]: time="2024-11-13T11:58:03.825543100Z" level=error msg="StopPodSandbox for \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\" failed" error="failed to destroy network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.826497 kubelet[2752]: E1113 11:58:03.826324 2752 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:03.826497 kubelet[2752]: E1113 11:58:03.826408 2752 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e"} Nov 13 11:58:03.827290 kubelet[2752]: E1113 11:58:03.826542 2752 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cfa244a-99c4-4135-8a9f-f544de1085c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 11:58:03.827290 kubelet[2752]: E1113 11:58:03.826571 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cfa244a-99c4-4135-8a9f-f544de1085c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8588545cd8-kdzjc" podUID="3cfa244a-99c4-4135-8a9f-f544de1085c6" Nov 13 11:58:03.829287 containerd[1515]: time="2024-11-13T11:58:03.829022921Z" level=error msg="StopPodSandbox for \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\" failed" error="failed to destroy network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.829458 kubelet[2752]: E1113 11:58:03.829216 2752 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:03.829458 kubelet[2752]: E1113 11:58:03.829267 2752 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d"} Nov 13 11:58:03.829458 kubelet[2752]: E1113 11:58:03.829302 2752 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a73a079a-5c98-427b-b55a-3d27769f0826\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 11:58:03.829458 kubelet[2752]: E1113 11:58:03.829328 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a73a079a-5c98-427b-b55a-3d27769f0826\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5tp6q" podUID="a73a079a-5c98-427b-b55a-3d27769f0826" Nov 13 11:58:03.829946 containerd[1515]: time="2024-11-13T11:58:03.829871817Z" level=error msg="StopPodSandbox for \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\" failed" error="failed to 
destroy network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.830285 kubelet[2752]: E1113 11:58:03.830239 2752 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:03.830350 kubelet[2752]: E1113 11:58:03.830309 2752 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e"} Nov 13 11:58:03.830350 kubelet[2752]: E1113 11:58:03.830343 2752 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8b9f6f4-7488-43e0-87ea-b08a68022038\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 11:58:03.831298 kubelet[2752]: E1113 11:58:03.830365 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8b9f6f4-7488-43e0-87ea-b08a68022038\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8588545cd8-hgc4l" podUID="f8b9f6f4-7488-43e0-87ea-b08a68022038" Nov 13 11:58:03.831488 containerd[1515]: time="2024-11-13T11:58:03.831440053Z" level=error msg="StopPodSandbox for \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\" failed" error="failed to destroy network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:03.831915 kubelet[2752]: E1113 11:58:03.831598 2752 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:03.831915 kubelet[2752]: E1113 11:58:03.831626 2752 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7"} Nov 13 11:58:03.831915 kubelet[2752]: E1113 11:58:03.831653 2752 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 11:58:03.831915 kubelet[2752]: E1113 11:58:03.831686 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f4895d8cb-7gh4j" podUID="b9eb2b70-ede6-4290-8483-7657e4c96a8b" Nov 13 11:58:04.323352 containerd[1515]: time="2024-11-13T11:58:04.323304001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gjw7n,Uid:f284fb92-56b8-452d-85a1-fc02bb9810b6,Namespace:kube-system,Attempt:0,}" Nov 13 11:58:04.331700 containerd[1515]: time="2024-11-13T11:58:04.331407634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29ddh,Uid:c2089e4d-0914-4208-bd3f-ebfa5baa6636,Namespace:kube-system,Attempt:0,}" Nov 13 11:58:04.439489 containerd[1515]: time="2024-11-13T11:58:04.439400678Z" level=error msg="Failed to destroy network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.446033 containerd[1515]: time="2024-11-13T11:58:04.441625439Z" level=error msg="encountered an error cleaning up failed sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.446033 containerd[1515]: time="2024-11-13T11:58:04.441728372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gjw7n,Uid:f284fb92-56b8-452d-85a1-fc02bb9810b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.446472 kubelet[2752]: E1113 11:58:04.444159 2752 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.446472 kubelet[2752]: E1113 11:58:04.444297 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gjw7n" Nov 13 11:58:04.446472 kubelet[2752]: E1113 11:58:04.444332 2752 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gjw7n" Nov 13 11:58:04.444009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4-shm.mount: Deactivated successfully. Nov 13 11:58:04.446679 kubelet[2752]: E1113 11:58:04.444395 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gjw7n_kube-system(f284fb92-56b8-452d-85a1-fc02bb9810b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gjw7n_kube-system(f284fb92-56b8-452d-85a1-fc02bb9810b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gjw7n" podUID="f284fb92-56b8-452d-85a1-fc02bb9810b6" Nov 13 11:58:04.469433 containerd[1515]: time="2024-11-13T11:58:04.469269535Z" level=error msg="Failed to destroy network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.470347 containerd[1515]: time="2024-11-13T11:58:04.469874634Z" level=error msg="encountered an error cleaning up failed sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.470347 containerd[1515]: time="2024-11-13T11:58:04.469982884Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-29ddh,Uid:c2089e4d-0914-4208-bd3f-ebfa5baa6636,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.470563 kubelet[2752]: E1113 11:58:04.470485 2752 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.470708 kubelet[2752]: E1113 11:58:04.470571 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-29ddh" Nov 13 11:58:04.470708 kubelet[2752]: E1113 11:58:04.470609 2752 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-29ddh" Nov 13 11:58:04.470708 kubelet[2752]: E1113 11:58:04.470670 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-29ddh_kube-system(c2089e4d-0914-4208-bd3f-ebfa5baa6636)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-29ddh_kube-system(c2089e4d-0914-4208-bd3f-ebfa5baa6636)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-29ddh" podUID="c2089e4d-0914-4208-bd3f-ebfa5baa6636" Nov 13 11:58:04.700444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923-shm.mount: Deactivated successfully. Nov 13 11:58:04.759738 kubelet[2752]: I1113 11:58:04.759704 2752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:04.761521 containerd[1515]: time="2024-11-13T11:58:04.760539193Z" level=info msg="StopPodSandbox for \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\"" Nov 13 11:58:04.761521 containerd[1515]: time="2024-11-13T11:58:04.760997689Z" level=info msg="Ensure that sandbox cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4 in task-service has been cleanup successfully" Nov 13 11:58:04.764692 kubelet[2752]: I1113 11:58:04.764341 2752 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:04.765689 containerd[1515]: time="2024-11-13T11:58:04.764981082Z" level=info msg="StopPodSandbox for \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\"" Nov 13 11:58:04.765689 containerd[1515]: time="2024-11-13T11:58:04.765464642Z" level=info msg="Ensure that sandbox 
bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923 in task-service has been cleanup successfully" Nov 13 11:58:04.818830 containerd[1515]: time="2024-11-13T11:58:04.818782484Z" level=error msg="StopPodSandbox for \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\" failed" error="failed to destroy network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.819213 kubelet[2752]: E1113 11:58:04.819172 2752 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:04.819383 kubelet[2752]: E1113 11:58:04.819359 2752 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4"} Nov 13 11:58:04.819498 kubelet[2752]: E1113 11:58:04.819484 2752 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f284fb92-56b8-452d-85a1-fc02bb9810b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 11:58:04.819633 kubelet[2752]: E1113 11:58:04.819605 2752 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f284fb92-56b8-452d-85a1-fc02bb9810b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gjw7n" podUID="f284fb92-56b8-452d-85a1-fc02bb9810b6" Nov 13 11:58:04.822823 containerd[1515]: time="2024-11-13T11:58:04.822764748Z" level=error msg="StopPodSandbox for \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\" failed" error="failed to destroy network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 13 11:58:04.823131 kubelet[2752]: E1113 11:58:04.823003 2752 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:04.823131 kubelet[2752]: E1113 11:58:04.823049 2752 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923"} Nov 13 11:58:04.823131 kubelet[2752]: E1113 11:58:04.823082 2752 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"c2089e4d-0914-4208-bd3f-ebfa5baa6636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 13 11:58:04.823131 kubelet[2752]: E1113 11:58:04.823106 2752 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2089e4d-0914-4208-bd3f-ebfa5baa6636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-29ddh" podUID="c2089e4d-0914-4208-bd3f-ebfa5baa6636" Nov 13 11:58:10.968813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659886709.mount: Deactivated successfully. 
Nov 13 11:58:11.058666 containerd[1515]: time="2024-11-13T11:58:11.057644077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 13 11:58:11.060824 containerd[1515]: time="2024-11-13T11:58:11.060757898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:11.063623 containerd[1515]: time="2024-11-13T11:58:11.063527655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 8.328095051s" Nov 13 11:58:11.063623 containerd[1515]: time="2024-11-13T11:58:11.063582457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 13 11:58:11.118283 containerd[1515]: time="2024-11-13T11:58:11.118071440Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:11.119458 containerd[1515]: time="2024-11-13T11:58:11.118837263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:11.190983 containerd[1515]: time="2024-11-13T11:58:11.190732010Z" level=info msg="CreateContainer within sandbox \"fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 13 11:58:11.258752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471027826.mount: 
Deactivated successfully. Nov 13 11:58:11.271965 containerd[1515]: time="2024-11-13T11:58:11.271886230Z" level=info msg="CreateContainer within sandbox \"fd65cbc2e311051c503a4d0e5e01672cb6971e9e742cb8daf58978bade4a906a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a\"" Nov 13 11:58:11.277718 containerd[1515]: time="2024-11-13T11:58:11.277633874Z" level=info msg="StartContainer for \"922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a\"" Nov 13 11:58:11.414479 systemd[1]: Started cri-containerd-922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a.scope - libcontainer container 922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a. Nov 13 11:58:11.457315 containerd[1515]: time="2024-11-13T11:58:11.454387175Z" level=info msg="StartContainer for \"922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a\" returns successfully" Nov 13 11:58:11.615393 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 13 11:58:11.626632 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 13 11:58:12.018402 kubelet[2752]: I1113 11:58:12.017352 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n49j7" podStartSLOduration=2.071392711 podStartE2EDuration="21.000256578s" podCreationTimestamp="2024-11-13 11:57:51 +0000 UTC" firstStartedPulling="2024-11-13 11:57:52.189279458 +0000 UTC m=+23.809188135" lastFinishedPulling="2024-11-13 11:58:11.118143324 +0000 UTC m=+42.738052002" observedRunningTime="2024-11-13 11:58:11.999897904 +0000 UTC m=+43.619806594" watchObservedRunningTime="2024-11-13 11:58:12.000256578 +0000 UTC m=+43.620165363" Nov 13 11:58:12.806817 systemd[1]: run-containerd-runc-k8s.io-922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a-runc.2wMAcm.mount: Deactivated successfully. 
Nov 13 11:58:12.897351 systemd[1]: run-containerd-runc-k8s.io-922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a-runc.SW9W42.mount: Deactivated successfully. Nov 13 11:58:13.621315 kernel: bpftool[4019]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 13 11:58:13.919416 systemd[1]: run-containerd-runc-k8s.io-922ca22decf2b5690bcf2632951e8bed25d5215a238b5cb41bc11fddefd8b82a-runc.qYbync.mount: Deactivated successfully. Nov 13 11:58:13.935664 systemd-networkd[1448]: vxlan.calico: Link UP Nov 13 11:58:13.938785 systemd-networkd[1448]: vxlan.calico: Gained carrier Nov 13 11:58:15.051595 systemd-networkd[1448]: vxlan.calico: Gained IPv6LL Nov 13 11:58:15.550251 containerd[1515]: time="2024-11-13T11:58:15.549148755Z" level=info msg="StopPodSandbox for \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\"" Nov 13 11:58:15.550251 containerd[1515]: time="2024-11-13T11:58:15.549333446Z" level=info msg="StopPodSandbox for \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\"" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.665 [INFO][4158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.665 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" iface="eth0" netns="/var/run/netns/cni-f1f15905-2c83-200a-5051-74c66671d8bf" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.665 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" iface="eth0" netns="/var/run/netns/cni-f1f15905-2c83-200a-5051-74c66671d8bf" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.666 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" iface="eth0" netns="/var/run/netns/cni-f1f15905-2c83-200a-5051-74c66671d8bf" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.666 [INFO][4158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.667 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.755 [INFO][4173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.756 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.757 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.772 [WARNING][4173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.773 [INFO][4173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.777 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:15.784765 containerd[1515]: 2024-11-13 11:58:15.778 [INFO][4158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:15.787022 systemd[1]: run-netns-cni\x2df1f15905\x2d2c83\x2d200a\x2d5051\x2d74c66671d8bf.mount: Deactivated successfully. 
Nov 13 11:58:15.791060 containerd[1515]: time="2024-11-13T11:58:15.791019307Z" level=info msg="TearDown network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\" successfully" Nov 13 11:58:15.791154 containerd[1515]: time="2024-11-13T11:58:15.791060408Z" level=info msg="StopPodSandbox for \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\" returns successfully" Nov 13 11:58:15.792516 containerd[1515]: time="2024-11-13T11:58:15.792480664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5tp6q,Uid:a73a079a-5c98-427b-b55a-3d27769f0826,Namespace:calico-system,Attempt:1,}" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.656 [INFO][4149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.660 [INFO][4149] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" iface="eth0" netns="/var/run/netns/cni-88e22cf4-fe61-e6d9-4ccc-5cbc8b129487" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.661 [INFO][4149] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" iface="eth0" netns="/var/run/netns/cni-88e22cf4-fe61-e6d9-4ccc-5cbc8b129487" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.662 [INFO][4149] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" iface="eth0" netns="/var/run/netns/cni-88e22cf4-fe61-e6d9-4ccc-5cbc8b129487" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.663 [INFO][4149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.663 [INFO][4149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.758 [INFO][4172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.758 [INFO][4172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.777 [INFO][4172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.793 [WARNING][4172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.793 [INFO][4172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.803 [INFO][4172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:15.823294 containerd[1515]: 2024-11-13 11:58:15.808 [INFO][4149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:15.824672 containerd[1515]: time="2024-11-13T11:58:15.823911183Z" level=info msg="TearDown network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\" successfully" Nov 13 11:58:15.824672 containerd[1515]: time="2024-11-13T11:58:15.824439621Z" level=info msg="StopPodSandbox for \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\" returns successfully" Nov 13 11:58:15.833234 containerd[1515]: time="2024-11-13T11:58:15.832766688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-kdzjc,Uid:3cfa244a-99c4-4135-8a9f-f544de1085c6,Namespace:calico-apiserver,Attempt:1,}" Nov 13 11:58:15.834217 systemd[1]: run-netns-cni\x2d88e22cf4\x2dfe61\x2de6d9\x2d4ccc\x2d5cbc8b129487.mount: Deactivated successfully. 
Nov 13 11:58:16.081076 systemd-networkd[1448]: cali712d876c997: Link UP Nov 13 11:58:16.081993 systemd-networkd[1448]: cali712d876c997: Gained carrier Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:15.943 [INFO][4197] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0 calico-apiserver-8588545cd8- calico-apiserver 3cfa244a-99c4-4135-8a9f-f544de1085c6 797 0 2024-11-13 11:57:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8588545cd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gr2mf.gb1.brightbox.com calico-apiserver-8588545cd8-kdzjc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali712d876c997 [] []}} ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:15.943 [INFO][4197] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:15.983 [INFO][4212] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" HandleID="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 
11:58:16.099520 containerd[1515]: 2024-11-13 11:58:15.999 [INFO][4212] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" HandleID="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gr2mf.gb1.brightbox.com", "pod":"calico-apiserver-8588545cd8-kdzjc", "timestamp":"2024-11-13 11:58:15.983772139 +0000 UTC"}, Hostname:"srv-gr2mf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:15.999 [INFO][4212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:15.999 [INFO][4212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:15.999 [INFO][4212] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gr2mf.gb1.brightbox.com' Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.001 [INFO][4212] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.017 [INFO][4212] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.029 [INFO][4212] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.032 [INFO][4212] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.046 [INFO][4212] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.047 [INFO][4212] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.054 [INFO][4212] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99 Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.061 [INFO][4212] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.071 [INFO][4212] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.1/26] block=192.168.55.0/26 handle="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.071 [INFO][4212] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.1/26] handle="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.072 [INFO][4212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:16.099520 containerd[1515]: 2024-11-13 11:58:16.072 [INFO][4212] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.1/26] IPv6=[] ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" HandleID="k8s-pod-network.7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:16.100416 containerd[1515]: 2024-11-13 11:58:16.075 [INFO][4197] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfa244a-99c4-4135-8a9f-f544de1085c6", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8588545cd8-kdzjc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali712d876c997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:16.100416 containerd[1515]: 2024-11-13 11:58:16.076 [INFO][4197] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.1/32] ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:16.100416 containerd[1515]: 2024-11-13 11:58:16.076 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali712d876c997 ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:16.100416 containerd[1515]: 2024-11-13 11:58:16.082 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" 
WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:16.100416 containerd[1515]: 2024-11-13 11:58:16.083 [INFO][4197] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfa244a-99c4-4135-8a9f-f544de1085c6", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99", Pod:"calico-apiserver-8588545cd8-kdzjc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali712d876c997", MAC:"22:05:7c:01:62:53", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:16.100416 containerd[1515]: 2024-11-13 11:58:16.093 [INFO][4197] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-kdzjc" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:16.153165 systemd-networkd[1448]: cali63df1322774: Link UP Nov 13 11:58:16.154105 systemd-networkd[1448]: cali63df1322774: Gained carrier Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:15.905 [INFO][4189] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0 csi-node-driver- calico-system a73a079a-5c98-427b-b55a-3d27769f0826 798 0 2024-11-13 11:57:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-gr2mf.gb1.brightbox.com csi-node-driver-5tp6q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali63df1322774 [] []}} ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:15.906 [INFO][4189] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:16.177293 
containerd[1515]: 2024-11-13 11:58:15.985 [INFO][4208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" HandleID="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.013 [INFO][4208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" HandleID="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003193b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gr2mf.gb1.brightbox.com", "pod":"csi-node-driver-5tp6q", "timestamp":"2024-11-13 11:58:15.985717027 +0000 UTC"}, Hostname:"srv-gr2mf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.014 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.072 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.072 [INFO][4208] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gr2mf.gb1.brightbox.com' Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.076 [INFO][4208] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.092 [INFO][4208] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.116 [INFO][4208] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.123 [INFO][4208] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.128 [INFO][4208] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.128 [INFO][4208] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.131 [INFO][4208] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.138 [INFO][4208] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.144 [INFO][4208] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.2/26] block=192.168.55.0/26 handle="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.145 [INFO][4208] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.2/26] handle="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.145 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:16.177293 containerd[1515]: 2024-11-13 11:58:16.145 [INFO][4208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.2/26] IPv6=[] ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" HandleID="k8s-pod-network.f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:16.178056 containerd[1515]: 2024-11-13 11:58:16.148 [INFO][4189] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a73a079a-5c98-427b-b55a-3d27769f0826", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-5tp6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63df1322774", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:16.178056 containerd[1515]: 2024-11-13 11:58:16.148 [INFO][4189] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.2/32] ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:16.178056 containerd[1515]: 2024-11-13 11:58:16.148 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63df1322774 ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:16.178056 containerd[1515]: 2024-11-13 11:58:16.154 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:16.178056 containerd[1515]: 
2024-11-13 11:58:16.155 [INFO][4189] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a73a079a-5c98-427b-b55a-3d27769f0826", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf", Pod:"csi-node-driver-5tp6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63df1322774", MAC:"9a:e7:1e:d7:1b:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:16.178056 containerd[1515]: 2024-11-13 11:58:16.169 [INFO][4189] cni-plugin/k8s.go 
500: Wrote updated endpoint to datastore ContainerID="f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf" Namespace="calico-system" Pod="csi-node-driver-5tp6q" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:16.188961 containerd[1515]: time="2024-11-13T11:58:16.187733793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:58:16.188961 containerd[1515]: time="2024-11-13T11:58:16.187810333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:58:16.188961 containerd[1515]: time="2024-11-13T11:58:16.187841115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:16.188961 containerd[1515]: time="2024-11-13T11:58:16.188300781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:16.221378 containerd[1515]: time="2024-11-13T11:58:16.220469864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:58:16.221378 containerd[1515]: time="2024-11-13T11:58:16.220528987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:58:16.221378 containerd[1515]: time="2024-11-13T11:58:16.220540218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:16.221378 containerd[1515]: time="2024-11-13T11:58:16.220618019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:16.220889 systemd[1]: Started cri-containerd-7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99.scope - libcontainer container 7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99. Nov 13 11:58:16.249383 systemd[1]: Started cri-containerd-f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf.scope - libcontainer container f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf. Nov 13 11:58:16.288453 containerd[1515]: time="2024-11-13T11:58:16.288267611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5tp6q,Uid:a73a079a-5c98-427b-b55a-3d27769f0826,Namespace:calico-system,Attempt:1,} returns sandbox id \"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf\"" Nov 13 11:58:16.296263 containerd[1515]: time="2024-11-13T11:58:16.295334933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 13 11:58:16.297532 containerd[1515]: time="2024-11-13T11:58:16.297402809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-kdzjc,Uid:3cfa244a-99c4-4135-8a9f-f544de1085c6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99\"" Nov 13 11:58:16.537736 containerd[1515]: time="2024-11-13T11:58:16.537632455Z" level=info msg="StopPodSandbox for \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\"" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.594 [INFO][4341] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.595 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" iface="eth0" netns="/var/run/netns/cni-3a52a82f-9278-efeb-0b1b-555a17e19d06" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.601 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" iface="eth0" netns="/var/run/netns/cni-3a52a82f-9278-efeb-0b1b-555a17e19d06" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.602 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" iface="eth0" netns="/var/run/netns/cni-3a52a82f-9278-efeb-0b1b-555a17e19d06" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.603 [INFO][4341] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.603 [INFO][4341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.634 [INFO][4347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.634 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.634 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.647 [WARNING][4347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.647 [INFO][4347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.659 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:16.664381 containerd[1515]: 2024-11-13 11:58:16.661 [INFO][4341] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:16.665320 containerd[1515]: time="2024-11-13T11:58:16.664592890Z" level=info msg="TearDown network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\" successfully" Nov 13 11:58:16.665320 containerd[1515]: time="2024-11-13T11:58:16.664644732Z" level=info msg="StopPodSandbox for \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\" returns successfully" Nov 13 11:58:16.666370 containerd[1515]: time="2024-11-13T11:58:16.666343483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gjw7n,Uid:f284fb92-56b8-452d-85a1-fc02bb9810b6,Namespace:kube-system,Attempt:1,}" Nov 13 11:58:16.794028 systemd[1]: run-netns-cni\x2d3a52a82f\x2d9278\x2defeb\x2d0b1b\x2d555a17e19d06.mount: Deactivated successfully. 
Nov 13 11:58:16.827599 systemd-networkd[1448]: calib78b4920e14: Link UP Nov 13 11:58:16.828295 systemd-networkd[1448]: calib78b4920e14: Gained carrier Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.720 [INFO][4354] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0 coredns-7db6d8ff4d- kube-system f284fb92-56b8-452d-85a1-fc02bb9810b6 810 0 2024-11-13 11:57:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gr2mf.gb1.brightbox.com coredns-7db6d8ff4d-gjw7n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib78b4920e14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.721 [INFO][4354] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.764 [INFO][4364] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" HandleID="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.774 [INFO][4364] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" HandleID="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040ca90), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gr2mf.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-gjw7n", "timestamp":"2024-11-13 11:58:16.764180825 +0000 UTC"}, Hostname:"srv-gr2mf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.775 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.775 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.775 [INFO][4364] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gr2mf.gb1.brightbox.com' Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.777 [INFO][4364] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.787 [INFO][4364] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.797 [INFO][4364] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.800 [INFO][4364] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.804 [INFO][4364] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.804 [INFO][4364] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.807 [INFO][4364] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.813 [INFO][4364] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.820 [INFO][4364] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.3/26] block=192.168.55.0/26 handle="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.821 [INFO][4364] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.3/26] handle="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.821 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:16.845678 containerd[1515]: 2024-11-13 11:58:16.821 [INFO][4364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.3/26] IPv6=[] ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" HandleID="k8s-pod-network.f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.846953 containerd[1515]: 2024-11-13 11:58:16.824 [INFO][4354] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f284fb92-56b8-452d-85a1-fc02bb9810b6", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-gjw7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78b4920e14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:16.846953 containerd[1515]: 2024-11-13 11:58:16.824 [INFO][4354] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.3/32] ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.846953 containerd[1515]: 2024-11-13 11:58:16.825 [INFO][4354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib78b4920e14 ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.846953 containerd[1515]: 2024-11-13 11:58:16.828 [INFO][4354] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.846953 containerd[1515]: 2024-11-13 11:58:16.829 [INFO][4354] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f284fb92-56b8-452d-85a1-fc02bb9810b6", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a", Pod:"coredns-7db6d8ff4d-gjw7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78b4920e14", 
MAC:"52:26:d4:d3:3c:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:16.846953 containerd[1515]: 2024-11-13 11:58:16.841 [INFO][4354] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gjw7n" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:16.871320 containerd[1515]: time="2024-11-13T11:58:16.871168480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:58:16.871458 containerd[1515]: time="2024-11-13T11:58:16.871399535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:58:16.871505 containerd[1515]: time="2024-11-13T11:58:16.871474794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:16.871845 containerd[1515]: time="2024-11-13T11:58:16.871809956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:16.899395 systemd[1]: Started cri-containerd-f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a.scope - libcontainer container f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a. 
Nov 13 11:58:16.974757 containerd[1515]: time="2024-11-13T11:58:16.974717043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gjw7n,Uid:f284fb92-56b8-452d-85a1-fc02bb9810b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a\"" Nov 13 11:58:16.978957 containerd[1515]: time="2024-11-13T11:58:16.978920342Z" level=info msg="CreateContainer within sandbox \"f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 11:58:16.994457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19692094.mount: Deactivated successfully. Nov 13 11:58:16.996399 containerd[1515]: time="2024-11-13T11:58:16.996355288Z" level=info msg="CreateContainer within sandbox \"f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15d2b6fa5dfdf9ba5ad63e48775d5e0429f52511ab67ce5f1d2e31571449dc51\"" Nov 13 11:58:16.998372 containerd[1515]: time="2024-11-13T11:58:16.997516305Z" level=info msg="StartContainer for \"15d2b6fa5dfdf9ba5ad63e48775d5e0429f52511ab67ce5f1d2e31571449dc51\"" Nov 13 11:58:17.026409 systemd[1]: Started cri-containerd-15d2b6fa5dfdf9ba5ad63e48775d5e0429f52511ab67ce5f1d2e31571449dc51.scope - libcontainer container 15d2b6fa5dfdf9ba5ad63e48775d5e0429f52511ab67ce5f1d2e31571449dc51. 
Nov 13 11:58:17.061946 containerd[1515]: time="2024-11-13T11:58:17.061686169Z" level=info msg="StartContainer for \"15d2b6fa5dfdf9ba5ad63e48775d5e0429f52511ab67ce5f1d2e31571449dc51\" returns successfully" Nov 13 11:58:17.163559 systemd-networkd[1448]: cali712d876c997: Gained IPv6LL Nov 13 11:58:17.291369 systemd-networkd[1448]: cali63df1322774: Gained IPv6LL Nov 13 11:58:17.974640 kubelet[2752]: I1113 11:58:17.974403 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gjw7n" podStartSLOduration=33.974252213 podStartE2EDuration="33.974252213s" podCreationTimestamp="2024-11-13 11:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 11:58:17.973407555 +0000 UTC m=+49.593316250" watchObservedRunningTime="2024-11-13 11:58:17.974252213 +0000 UTC m=+49.594160929" Nov 13 11:58:18.226745 containerd[1515]: time="2024-11-13T11:58:18.226382369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:18.227420 containerd[1515]: time="2024-11-13T11:58:18.227380062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 13 11:58:18.227983 containerd[1515]: time="2024-11-13T11:58:18.227690615Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:18.229545 containerd[1515]: time="2024-11-13T11:58:18.229514946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:18.230523 containerd[1515]: time="2024-11-13T11:58:18.230357487Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.934978963s" Nov 13 11:58:18.230523 containerd[1515]: time="2024-11-13T11:58:18.230414568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 13 11:58:18.231751 containerd[1515]: time="2024-11-13T11:58:18.231663426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 13 11:58:18.233252 containerd[1515]: time="2024-11-13T11:58:18.233209577Z" level=info msg="CreateContainer within sandbox \"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 13 11:58:18.247646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1960843857.mount: Deactivated successfully. Nov 13 11:58:18.254982 containerd[1515]: time="2024-11-13T11:58:18.254904195Z" level=info msg="CreateContainer within sandbox \"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9aa6a60d4b63e51989ef8ea2aeef6cb4714e08d5e4cc70c4e14ddca8c38ee5eb\"" Nov 13 11:58:18.255700 containerd[1515]: time="2024-11-13T11:58:18.255657371Z" level=info msg="StartContainer for \"9aa6a60d4b63e51989ef8ea2aeef6cb4714e08d5e4cc70c4e14ddca8c38ee5eb\"" Nov 13 11:58:18.305429 systemd[1]: Started cri-containerd-9aa6a60d4b63e51989ef8ea2aeef6cb4714e08d5e4cc70c4e14ddca8c38ee5eb.scope - libcontainer container 9aa6a60d4b63e51989ef8ea2aeef6cb4714e08d5e4cc70c4e14ddca8c38ee5eb. 
Nov 13 11:58:18.347589 containerd[1515]: time="2024-11-13T11:58:18.347345123Z" level=info msg="StartContainer for \"9aa6a60d4b63e51989ef8ea2aeef6cb4714e08d5e4cc70c4e14ddca8c38ee5eb\" returns successfully" Nov 13 11:58:18.540284 containerd[1515]: time="2024-11-13T11:58:18.540036096Z" level=info msg="StopPodSandbox for \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\"" Nov 13 11:58:18.543857 containerd[1515]: time="2024-11-13T11:58:18.541376481Z" level=info msg="StopPodSandbox for \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\"" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.618 [INFO][4531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.618 [INFO][4531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" iface="eth0" netns="/var/run/netns/cni-1a2c67b1-76c6-838e-290e-ea081c537a12" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.619 [INFO][4531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" iface="eth0" netns="/var/run/netns/cni-1a2c67b1-76c6-838e-290e-ea081c537a12" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.620 [INFO][4531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" iface="eth0" netns="/var/run/netns/cni-1a2c67b1-76c6-838e-290e-ea081c537a12" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.620 [INFO][4531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.620 [INFO][4531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.666 [INFO][4543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.667 [INFO][4543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.667 [INFO][4543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.677 [WARNING][4543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.677 [INFO][4543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.684 [INFO][4543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:18.698862 containerd[1515]: 2024-11-13 11:58:18.691 [INFO][4531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:18.702135 containerd[1515]: time="2024-11-13T11:58:18.699058671Z" level=info msg="TearDown network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\" successfully" Nov 13 11:58:18.702135 containerd[1515]: time="2024-11-13T11:58:18.699090663Z" level=info msg="StopPodSandbox for \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\" returns successfully" Nov 13 11:58:18.702135 containerd[1515]: time="2024-11-13T11:58:18.701170418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29ddh,Uid:c2089e4d-0914-4208-bd3f-ebfa5baa6636,Namespace:kube-system,Attempt:1,}" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.645 [INFO][4530] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.645 [INFO][4530] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" iface="eth0" netns="/var/run/netns/cni-e929d8b4-42ec-4643-66ff-523d0c8ecdf5" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.646 [INFO][4530] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" iface="eth0" netns="/var/run/netns/cni-e929d8b4-42ec-4643-66ff-523d0c8ecdf5" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.650 [INFO][4530] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" iface="eth0" netns="/var/run/netns/cni-e929d8b4-42ec-4643-66ff-523d0c8ecdf5" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.650 [INFO][4530] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.650 [INFO][4530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.692 [INFO][4547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.692 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.692 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.708 [WARNING][4547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.708 [INFO][4547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.717 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:18.727218 containerd[1515]: 2024-11-13 11:58:18.722 [INFO][4530] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:18.734262 containerd[1515]: time="2024-11-13T11:58:18.734222808Z" level=info msg="TearDown network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\" successfully" Nov 13 11:58:18.734262 containerd[1515]: time="2024-11-13T11:58:18.734259181Z" level=info msg="StopPodSandbox for \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\" returns successfully" Nov 13 11:58:18.735125 containerd[1515]: time="2024-11-13T11:58:18.734756272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4895d8cb-7gh4j,Uid:b9eb2b70-ede6-4290-8483-7657e4c96a8b,Namespace:calico-system,Attempt:1,}" Nov 13 11:58:18.793342 systemd[1]: run-netns-cni\x2d1a2c67b1\x2d76c6\x2d838e\x2d290e\x2dea081c537a12.mount: Deactivated successfully. 
Nov 13 11:58:18.793436 systemd[1]: run-netns-cni\x2de929d8b4\x2d42ec\x2d4643\x2d66ff\x2d523d0c8ecdf5.mount: Deactivated successfully. Nov 13 11:58:18.827671 systemd-networkd[1448]: calib78b4920e14: Gained IPv6LL Nov 13 11:58:18.965205 systemd-networkd[1448]: cali86b37a1fb50: Link UP Nov 13 11:58:18.966330 systemd-networkd[1448]: cali86b37a1fb50: Gained carrier Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.817 [INFO][4556] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0 coredns-7db6d8ff4d- kube-system c2089e4d-0914-4208-bd3f-ebfa5baa6636 831 0 2024-11-13 11:57:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gr2mf.gb1.brightbox.com coredns-7db6d8ff4d-29ddh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86b37a1fb50 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.817 [INFO][4556] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.889 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" HandleID="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" 
Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.902 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" HandleID="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039fcc0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gr2mf.gb1.brightbox.com", "pod":"coredns-7db6d8ff4d-29ddh", "timestamp":"2024-11-13 11:58:18.889338836 +0000 UTC"}, Hostname:"srv-gr2mf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.903 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.903 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.903 [INFO][4579] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gr2mf.gb1.brightbox.com' Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.905 [INFO][4579] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.913 [INFO][4579] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.919 [INFO][4579] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.922 [INFO][4579] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.925 [INFO][4579] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.925 [INFO][4579] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.928 [INFO][4579] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71 Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.934 [INFO][4579] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.947 [INFO][4579] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.4/26] block=192.168.55.0/26 handle="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.947 [INFO][4579] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.4/26] handle="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.948 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:19.011266 containerd[1515]: 2024-11-13 11:58:18.948 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.4/26] IPv6=[] ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" HandleID="k8s-pod-network.ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:19.013917 containerd[1515]: 2024-11-13 11:58:18.957 [INFO][4556] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c2089e4d-0914-4208-bd3f-ebfa5baa6636", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7db6d8ff4d-29ddh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86b37a1fb50", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:19.013917 containerd[1515]: 2024-11-13 11:58:18.958 [INFO][4556] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.4/32] ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:19.013917 containerd[1515]: 2024-11-13 11:58:18.958 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86b37a1fb50 ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:19.013917 containerd[1515]: 2024-11-13 11:58:18.967 [INFO][4556] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:19.013917 containerd[1515]: 2024-11-13 11:58:18.967 [INFO][4556] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c2089e4d-0914-4208-bd3f-ebfa5baa6636", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71", Pod:"coredns-7db6d8ff4d-29ddh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86b37a1fb50", 
MAC:"7a:b1:5a:0e:87:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:19.013917 containerd[1515]: 2024-11-13 11:58:18.994 [INFO][4556] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29ddh" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:19.095411 systemd-networkd[1448]: cali75ef0cf2a16: Link UP Nov 13 11:58:19.097606 systemd-networkd[1448]: cali75ef0cf2a16: Gained carrier Nov 13 11:58:19.103213 containerd[1515]: time="2024-11-13T11:58:19.102409704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:58:19.103213 containerd[1515]: time="2024-11-13T11:58:19.102936124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:58:19.103213 containerd[1515]: time="2024-11-13T11:58:19.103061928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:19.104317 containerd[1515]: time="2024-11-13T11:58:19.103454742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.877 [INFO][4567] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0 calico-kube-controllers-7f4895d8cb- calico-system b9eb2b70-ede6-4290-8483-7657e4c96a8b 832 0 2024-11-13 11:57:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f4895d8cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gr2mf.gb1.brightbox.com calico-kube-controllers-7f4895d8cb-7gh4j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali75ef0cf2a16 [] []}} ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.878 [INFO][4567] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.932 [INFO][4585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:19.148513 containerd[1515]: 
2024-11-13 11:58:18.958 [INFO][4585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305a40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gr2mf.gb1.brightbox.com", "pod":"calico-kube-controllers-7f4895d8cb-7gh4j", "timestamp":"2024-11-13 11:58:18.932071615 +0000 UTC"}, Hostname:"srv-gr2mf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.959 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.962 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.962 [INFO][4585] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gr2mf.gb1.brightbox.com' Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.972 [INFO][4585] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:18.992 [INFO][4585] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.027 [INFO][4585] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.032 [INFO][4585] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.042 [INFO][4585] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.043 [INFO][4585] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.048 [INFO][4585] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.058 [INFO][4585] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.080 [INFO][4585] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.5/26] block=192.168.55.0/26 handle="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.084 [INFO][4585] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.5/26] handle="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.084 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:19.148513 containerd[1515]: 2024-11-13 11:58:19.085 [INFO][4585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.5/26] IPv6=[] ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:19.151652 containerd[1515]: 2024-11-13 11:58:19.088 [INFO][4567] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0", GenerateName:"calico-kube-controllers-7f4895d8cb-", Namespace:"calico-system", SelfLink:"", UID:"b9eb2b70-ede6-4290-8483-7657e4c96a8b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4895d8cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7f4895d8cb-7gh4j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75ef0cf2a16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:19.151652 containerd[1515]: 2024-11-13 11:58:19.089 [INFO][4567] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.5/32] ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:19.151652 containerd[1515]: 2024-11-13 11:58:19.089 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75ef0cf2a16 ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:19.151652 containerd[1515]: 2024-11-13 11:58:19.098 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:19.151652 containerd[1515]: 2024-11-13 11:58:19.099 [INFO][4567] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0", GenerateName:"calico-kube-controllers-7f4895d8cb-", Namespace:"calico-system", SelfLink:"", UID:"b9eb2b70-ede6-4290-8483-7657e4c96a8b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4895d8cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b", Pod:"calico-kube-controllers-7f4895d8cb-7gh4j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.5/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75ef0cf2a16", MAC:"a2:7a:e6:22:4e:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:19.151652 containerd[1515]: 2024-11-13 11:58:19.140 [INFO][4567] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Namespace="calico-system" Pod="calico-kube-controllers-7f4895d8cb-7gh4j" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:19.156941 systemd[1]: Started cri-containerd-ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71.scope - libcontainer container ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71. Nov 13 11:58:19.191884 containerd[1515]: time="2024-11-13T11:58:19.191767714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:58:19.193540 containerd[1515]: time="2024-11-13T11:58:19.192758272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:58:19.193726 containerd[1515]: time="2024-11-13T11:58:19.193695229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:19.194089 containerd[1515]: time="2024-11-13T11:58:19.194042590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:19.235419 systemd[1]: Started cri-containerd-83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b.scope - libcontainer container 83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b. 
Nov 13 11:58:19.237056 containerd[1515]: time="2024-11-13T11:58:19.237005116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29ddh,Uid:c2089e4d-0914-4208-bd3f-ebfa5baa6636,Namespace:kube-system,Attempt:1,} returns sandbox id \"ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71\"" Nov 13 11:58:19.248664 containerd[1515]: time="2024-11-13T11:58:19.248623527Z" level=info msg="CreateContainer within sandbox \"ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 11:58:19.266060 containerd[1515]: time="2024-11-13T11:58:19.265992203Z" level=info msg="CreateContainer within sandbox \"ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b61aa87ee4f44bd9addf7ffe437234e4bab5508a5dbd7aa1b781c4504377a2e0\"" Nov 13 11:58:19.267680 containerd[1515]: time="2024-11-13T11:58:19.267627792Z" level=info msg="StartContainer for \"b61aa87ee4f44bd9addf7ffe437234e4bab5508a5dbd7aa1b781c4504377a2e0\"" Nov 13 11:58:19.308953 containerd[1515]: time="2024-11-13T11:58:19.308882509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4895d8cb-7gh4j,Uid:b9eb2b70-ede6-4290-8483-7657e4c96a8b,Namespace:calico-system,Attempt:1,} returns sandbox id \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\"" Nov 13 11:58:19.310350 systemd[1]: Started cri-containerd-b61aa87ee4f44bd9addf7ffe437234e4bab5508a5dbd7aa1b781c4504377a2e0.scope - libcontainer container b61aa87ee4f44bd9addf7ffe437234e4bab5508a5dbd7aa1b781c4504377a2e0. 
Nov 13 11:58:19.349047 containerd[1515]: time="2024-11-13T11:58:19.346948293Z" level=info msg="StartContainer for \"b61aa87ee4f44bd9addf7ffe437234e4bab5508a5dbd7aa1b781c4504377a2e0\" returns successfully" Nov 13 11:58:19.537028 containerd[1515]: time="2024-11-13T11:58:19.536933440Z" level=info msg="StopPodSandbox for \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\"" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.612 [INFO][4763] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.613 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" iface="eth0" netns="/var/run/netns/cni-97fb2f66-61d9-0931-282f-0f30f1e5692c" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.615 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" iface="eth0" netns="/var/run/netns/cni-97fb2f66-61d9-0931-282f-0f30f1e5692c" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.616 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" iface="eth0" netns="/var/run/netns/cni-97fb2f66-61d9-0931-282f-0f30f1e5692c" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.616 [INFO][4763] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.616 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.655 [INFO][4769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.655 [INFO][4769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.655 [INFO][4769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.665 [WARNING][4769] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.665 [INFO][4769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.673 [INFO][4769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:19.678456 containerd[1515]: 2024-11-13 11:58:19.674 [INFO][4763] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:19.679921 containerd[1515]: time="2024-11-13T11:58:19.679695658Z" level=info msg="TearDown network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\" successfully" Nov 13 11:58:19.679921 containerd[1515]: time="2024-11-13T11:58:19.679754649Z" level=info msg="StopPodSandbox for \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\" returns successfully" Nov 13 11:58:19.681439 containerd[1515]: time="2024-11-13T11:58:19.681406782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-hgc4l,Uid:f8b9f6f4-7488-43e0-87ea-b08a68022038,Namespace:calico-apiserver,Attempt:1,}" Nov 13 11:58:19.797862 systemd[1]: run-netns-cni\x2d97fb2f66\x2d61d9\x2d0931\x2d282f\x2d0f30f1e5692c.mount: Deactivated successfully. 
Nov 13 11:58:19.917123 systemd-networkd[1448]: cali8f7f851bfd3: Link UP Nov 13 11:58:19.917719 systemd-networkd[1448]: cali8f7f851bfd3: Gained carrier Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.771 [INFO][4776] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0 calico-apiserver-8588545cd8- calico-apiserver f8b9f6f4-7488-43e0-87ea-b08a68022038 851 0 2024-11-13 11:57:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8588545cd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gr2mf.gb1.brightbox.com calico-apiserver-8588545cd8-hgc4l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f7f851bfd3 [] []}} ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.773 [INFO][4776] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.828 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" HandleID="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 
11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.857 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" HandleID="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gr2mf.gb1.brightbox.com", "pod":"calico-apiserver-8588545cd8-hgc4l", "timestamp":"2024-11-13 11:58:19.828688506 +0000 UTC"}, Hostname:"srv-gr2mf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.857 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.857 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.857 [INFO][4787] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gr2mf.gb1.brightbox.com' Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.861 [INFO][4787] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.871 [INFO][4787] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.878 [INFO][4787] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.881 [INFO][4787] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.884 [INFO][4787] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.884 [INFO][4787] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.887 [INFO][4787] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6 Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.895 [INFO][4787] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.906 [INFO][4787] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.6/26] block=192.168.55.0/26 handle="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.906 [INFO][4787] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.6/26] handle="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.906 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:19.955090 containerd[1515]: 2024-11-13 11:58:19.906 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.6/26] IPv6=[] ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" HandleID="k8s-pod-network.255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.958251 containerd[1515]: 2024-11-13 11:58:19.911 [INFO][4776] cni-plugin/k8s.go 386: Populated endpoint ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8b9f6f4-7488-43e0-87ea-b08a68022038", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8588545cd8-hgc4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f7f851bfd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:19.958251 containerd[1515]: 2024-11-13 11:58:19.911 [INFO][4776] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.6/32] ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.958251 containerd[1515]: 2024-11-13 11:58:19.911 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f7f851bfd3 ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.958251 containerd[1515]: 2024-11-13 11:58:19.918 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" 
WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:19.958251 containerd[1515]: 2024-11-13 11:58:19.919 [INFO][4776] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8b9f6f4-7488-43e0-87ea-b08a68022038", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6", Pod:"calico-apiserver-8588545cd8-hgc4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f7f851bfd3", MAC:"32:e3:95:75:1c:94", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:19.958251 containerd[1515]: 2024-11-13 11:58:19.940 [INFO][4776] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6" Namespace="calico-apiserver" Pod="calico-apiserver-8588545cd8-hgc4l" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:20.024941 containerd[1515]: time="2024-11-13T11:58:20.024797746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:58:20.024941 containerd[1515]: time="2024-11-13T11:58:20.024908283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:58:20.025311 containerd[1515]: time="2024-11-13T11:58:20.024968499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:20.025764 containerd[1515]: time="2024-11-13T11:58:20.025561924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:20.063572 systemd[1]: Started cri-containerd-255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6.scope - libcontainer container 255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6. 
Nov 13 11:58:20.140020 containerd[1515]: time="2024-11-13T11:58:20.139949152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8588545cd8-hgc4l,Uid:f8b9f6f4-7488-43e0-87ea-b08a68022038,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6\"" Nov 13 11:58:20.747883 systemd-networkd[1448]: cali86b37a1fb50: Gained IPv6LL Nov 13 11:58:21.003423 systemd-networkd[1448]: cali75ef0cf2a16: Gained IPv6LL Nov 13 11:58:21.018908 kubelet[2752]: I1113 11:58:21.018716 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-29ddh" podStartSLOduration=37.018145442 podStartE2EDuration="37.018145442s" podCreationTimestamp="2024-11-13 11:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 11:58:19.996477017 +0000 UTC m=+51.616385694" watchObservedRunningTime="2024-11-13 11:58:21.018145442 +0000 UTC m=+52.638054141" Nov 13 11:58:21.188874 containerd[1515]: time="2024-11-13T11:58:21.188815689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:21.190256 containerd[1515]: time="2024-11-13T11:58:21.189932964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 13 11:58:21.190256 containerd[1515]: time="2024-11-13T11:58:21.189999151Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:21.193330 containerd[1515]: time="2024-11-13T11:58:21.193234011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 13 11:58:21.194258 containerd[1515]: time="2024-11-13T11:58:21.194019070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.962314909s" Nov 13 11:58:21.194258 containerd[1515]: time="2024-11-13T11:58:21.194068378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 13 11:58:21.198881 containerd[1515]: time="2024-11-13T11:58:21.198857707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 13 11:58:21.201659 containerd[1515]: time="2024-11-13T11:58:21.201634444Z" level=info msg="CreateContainer within sandbox \"7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 13 11:58:21.225934 containerd[1515]: time="2024-11-13T11:58:21.225891089Z" level=info msg="CreateContainer within sandbox \"7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ce7a4200dff2998432a3521224f2e96967a4eb1eafd0a862d2af9c5f0278562a\"" Nov 13 11:58:21.227221 containerd[1515]: time="2024-11-13T11:58:21.227091152Z" level=info msg="StartContainer for \"ce7a4200dff2998432a3521224f2e96967a4eb1eafd0a862d2af9c5f0278562a\"" Nov 13 11:58:21.297343 systemd[1]: Started cri-containerd-ce7a4200dff2998432a3521224f2e96967a4eb1eafd0a862d2af9c5f0278562a.scope - libcontainer container ce7a4200dff2998432a3521224f2e96967a4eb1eafd0a862d2af9c5f0278562a. 
Nov 13 11:58:21.355077 containerd[1515]: time="2024-11-13T11:58:21.354946226Z" level=info msg="StartContainer for \"ce7a4200dff2998432a3521224f2e96967a4eb1eafd0a862d2af9c5f0278562a\" returns successfully" Nov 13 11:58:21.963477 systemd-networkd[1448]: cali8f7f851bfd3: Gained IPv6LL Nov 13 11:58:22.028945 kubelet[2752]: I1113 11:58:22.028861 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8588545cd8-kdzjc" podStartSLOduration=25.129412313 podStartE2EDuration="30.028822362s" podCreationTimestamp="2024-11-13 11:57:52 +0000 UTC" firstStartedPulling="2024-11-13 11:58:16.299074862 +0000 UTC m=+47.918983536" lastFinishedPulling="2024-11-13 11:58:21.198484897 +0000 UTC m=+52.818393585" observedRunningTime="2024-11-13 11:58:22.028573443 +0000 UTC m=+53.648482186" watchObservedRunningTime="2024-11-13 11:58:22.028822362 +0000 UTC m=+53.648731105" Nov 13 11:58:22.951752 containerd[1515]: time="2024-11-13T11:58:22.951629436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:22.953539 containerd[1515]: time="2024-11-13T11:58:22.953057408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 13 11:58:22.953539 containerd[1515]: time="2024-11-13T11:58:22.953411580Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:22.967174 containerd[1515]: time="2024-11-13T11:58:22.966733001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:22.967694 containerd[1515]: time="2024-11-13T11:58:22.967662525Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.768657193s" Nov 13 11:58:22.967792 containerd[1515]: time="2024-11-13T11:58:22.967698966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 13 11:58:22.969154 containerd[1515]: time="2024-11-13T11:58:22.969127648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 13 11:58:22.973748 containerd[1515]: time="2024-11-13T11:58:22.973712887Z" level=info msg="CreateContainer within sandbox \"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 13 11:58:22.987325 containerd[1515]: time="2024-11-13T11:58:22.987240494Z" level=info msg="CreateContainer within sandbox \"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4808df325f71daedd4f6da9fad682e98920f01feef1c0efbc3d50f0a92edbd34\"" Nov 13 11:58:22.989685 containerd[1515]: time="2024-11-13T11:58:22.989654826Z" level=info msg="StartContainer for \"4808df325f71daedd4f6da9fad682e98920f01feef1c0efbc3d50f0a92edbd34\"" Nov 13 11:58:22.991584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398353722.mount: Deactivated successfully. 
Nov 13 11:58:23.006507 kubelet[2752]: I1113 11:58:23.005635 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 11:58:23.042461 systemd[1]: Started cri-containerd-4808df325f71daedd4f6da9fad682e98920f01feef1c0efbc3d50f0a92edbd34.scope - libcontainer container 4808df325f71daedd4f6da9fad682e98920f01feef1c0efbc3d50f0a92edbd34. Nov 13 11:58:23.080501 containerd[1515]: time="2024-11-13T11:58:23.080458533Z" level=info msg="StartContainer for \"4808df325f71daedd4f6da9fad682e98920f01feef1c0efbc3d50f0a92edbd34\" returns successfully" Nov 13 11:58:23.790456 kubelet[2752]: I1113 11:58:23.790316 2752 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 13 11:58:23.790456 kubelet[2752]: I1113 11:58:23.790380 2752 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 13 11:58:24.036207 kubelet[2752]: I1113 11:58:24.035490 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5tp6q" podStartSLOduration=26.361380322 podStartE2EDuration="33.035446536s" podCreationTimestamp="2024-11-13 11:57:51 +0000 UTC" firstStartedPulling="2024-11-13 11:58:16.294909196 +0000 UTC m=+47.914817871" lastFinishedPulling="2024-11-13 11:58:22.968975389 +0000 UTC m=+54.588884085" observedRunningTime="2024-11-13 11:58:24.0336287 +0000 UTC m=+55.653537401" watchObservedRunningTime="2024-11-13 11:58:24.035446536 +0000 UTC m=+55.655355251" Nov 13 11:58:25.661013 containerd[1515]: time="2024-11-13T11:58:25.660925103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:25.662338 containerd[1515]: time="2024-11-13T11:58:25.662162588Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 13 11:58:25.663263 containerd[1515]: time="2024-11-13T11:58:25.663215519Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:25.665297 containerd[1515]: time="2024-11-13T11:58:25.665243192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:25.666816 containerd[1515]: time="2024-11-13T11:58:25.666491925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.697331239s" Nov 13 11:58:25.666816 containerd[1515]: time="2024-11-13T11:58:25.666536321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 13 11:58:25.669075 containerd[1515]: time="2024-11-13T11:58:25.668984798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 13 11:58:25.692998 containerd[1515]: time="2024-11-13T11:58:25.692952333Z" level=info msg="CreateContainer within sandbox \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 13 11:58:25.707988 containerd[1515]: time="2024-11-13T11:58:25.707777202Z" level=info msg="CreateContainer within sandbox \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\"" Nov 13 11:58:25.709627 containerd[1515]: time="2024-11-13T11:58:25.709584141Z" level=info msg="StartContainer for \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\"" Nov 13 11:58:25.793368 systemd[1]: Started cri-containerd-fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7.scope - libcontainer container fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7. Nov 13 11:58:25.862781 containerd[1515]: time="2024-11-13T11:58:25.861456106Z" level=info msg="StartContainer for \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\" returns successfully" Nov 13 11:58:26.019453 containerd[1515]: time="2024-11-13T11:58:26.019396812Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 11:58:26.023246 containerd[1515]: time="2024-11-13T11:58:26.022663985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 13 11:58:26.028354 containerd[1515]: time="2024-11-13T11:58:26.027978843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 358.798664ms" Nov 13 11:58:26.028354 containerd[1515]: time="2024-11-13T11:58:26.028020807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 13 11:58:26.033471 containerd[1515]: time="2024-11-13T11:58:26.033437346Z" level=info msg="CreateContainer within 
sandbox \"255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 13 11:58:26.057313 containerd[1515]: time="2024-11-13T11:58:26.056560659Z" level=info msg="CreateContainer within sandbox \"255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e828e0d2dcf96ea65f2ef4613887d85a7d9edd2bf1c3f1b418cd3022ac7613ee\"" Nov 13 11:58:26.060082 containerd[1515]: time="2024-11-13T11:58:26.059970116Z" level=info msg="StartContainer for \"e828e0d2dcf96ea65f2ef4613887d85a7d9edd2bf1c3f1b418cd3022ac7613ee\"" Nov 13 11:58:26.118408 systemd[1]: Started cri-containerd-e828e0d2dcf96ea65f2ef4613887d85a7d9edd2bf1c3f1b418cd3022ac7613ee.scope - libcontainer container e828e0d2dcf96ea65f2ef4613887d85a7d9edd2bf1c3f1b418cd3022ac7613ee. Nov 13 11:58:26.157891 kubelet[2752]: I1113 11:58:26.157787 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f4895d8cb-7gh4j" podStartSLOduration=27.798613361 podStartE2EDuration="34.15738175s" podCreationTimestamp="2024-11-13 11:57:52 +0000 UTC" firstStartedPulling="2024-11-13 11:58:19.310046213 +0000 UTC m=+50.929954891" lastFinishedPulling="2024-11-13 11:58:25.668814602 +0000 UTC m=+57.288723280" observedRunningTime="2024-11-13 11:58:26.059015266 +0000 UTC m=+57.678923965" watchObservedRunningTime="2024-11-13 11:58:26.15738175 +0000 UTC m=+57.777290448" Nov 13 11:58:26.217356 containerd[1515]: time="2024-11-13T11:58:26.217255264Z" level=info msg="StartContainer for \"e828e0d2dcf96ea65f2ef4613887d85a7d9edd2bf1c3f1b418cd3022ac7613ee\" returns successfully" Nov 13 11:58:27.078962 kubelet[2752]: I1113 11:58:27.077065 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8588545cd8-hgc4l" podStartSLOduration=29.188680877 podStartE2EDuration="35.077034402s" 
podCreationTimestamp="2024-11-13 11:57:52 +0000 UTC" firstStartedPulling="2024-11-13 11:58:20.142234874 +0000 UTC m=+51.762143562" lastFinishedPulling="2024-11-13 11:58:26.030588412 +0000 UTC m=+57.650497087" observedRunningTime="2024-11-13 11:58:27.072399369 +0000 UTC m=+58.692308067" watchObservedRunningTime="2024-11-13 11:58:27.077034402 +0000 UTC m=+58.696943099" Nov 13 11:58:28.051363 kubelet[2752]: I1113 11:58:28.050124 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 11:58:28.598457 containerd[1515]: time="2024-11-13T11:58:28.598361063Z" level=info msg="StopPodSandbox for \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\"" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.741 [WARNING][5075] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c2089e4d-0914-4208-bd3f-ebfa5baa6636", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", 
ContainerID:"ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71", Pod:"coredns-7db6d8ff4d-29ddh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86b37a1fb50", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.742 [INFO][5075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.742 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" iface="eth0" netns="" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.742 [INFO][5075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.742 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.787 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.787 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.787 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.794 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.794 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.796 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:28.801469 containerd[1515]: 2024-11-13 11:58:28.799 [INFO][5075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.803799 containerd[1515]: time="2024-11-13T11:58:28.802369620Z" level=info msg="TearDown network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\" successfully" Nov 13 11:58:28.803799 containerd[1515]: time="2024-11-13T11:58:28.802410084Z" level=info msg="StopPodSandbox for \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\" returns successfully" Nov 13 11:58:28.810933 containerd[1515]: time="2024-11-13T11:58:28.810887940Z" level=info msg="RemovePodSandbox for \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\"" Nov 13 11:58:28.814337 containerd[1515]: time="2024-11-13T11:58:28.814296466Z" level=info msg="Forcibly stopping sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\"" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.862 [WARNING][5099] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c2089e4d-0914-4208-bd3f-ebfa5baa6636", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"ffd6f344ea50e3bf3f87b7bbccb2a8caba45e5ab2c78fb7397cd1bc8c781af71", Pod:"coredns-7db6d8ff4d-29ddh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86b37a1fb50", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.863 [INFO][5099] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.863 [INFO][5099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" iface="eth0" netns="" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.863 [INFO][5099] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.863 [INFO][5099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.895 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.895 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.895 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.903 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.903 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" HandleID="k8s-pod-network.bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--29ddh-eth0" Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.905 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:28.909654 containerd[1515]: 2024-11-13 11:58:28.907 [INFO][5099] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923" Nov 13 11:58:28.909654 containerd[1515]: time="2024-11-13T11:58:28.909580831Z" level=info msg="TearDown network for sandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\" successfully" Nov 13 11:58:28.920529 containerd[1515]: time="2024-11-13T11:58:28.920482289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 11:58:28.920639 containerd[1515]: time="2024-11-13T11:58:28.920583025Z" level=info msg="RemovePodSandbox \"bf7cc5c3037e25e5ea5ae6693908f8aa9d5cc145ab62795380efaa23baaf6923\" returns successfully" Nov 13 11:58:28.921506 containerd[1515]: time="2024-11-13T11:58:28.921235100Z" level=info msg="StopPodSandbox for \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\"" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:28.976 [WARNING][5123] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0", GenerateName:"calico-kube-controllers-7f4895d8cb-", Namespace:"calico-system", SelfLink:"", UID:"b9eb2b70-ede6-4290-8483-7657e4c96a8b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4895d8cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b", Pod:"calico-kube-controllers-7f4895d8cb-7gh4j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.5/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75ef0cf2a16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:28.976 [INFO][5123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:28.976 [INFO][5123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" iface="eth0" netns="" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:28.976 [INFO][5123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:28.977 [INFO][5123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:29.020 [INFO][5129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:29.020 [INFO][5129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:29.020 [INFO][5129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:29.030 [WARNING][5129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:29.030 [INFO][5129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:29.033 [INFO][5129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.040302 containerd[1515]: 2024-11-13 11:58:29.036 [INFO][5123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.041911 containerd[1515]: time="2024-11-13T11:58:29.040466022Z" level=info msg="TearDown network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\" successfully" Nov 13 11:58:29.041911 containerd[1515]: time="2024-11-13T11:58:29.040586165Z" level=info msg="StopPodSandbox for \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\" returns successfully" Nov 13 11:58:29.041911 containerd[1515]: time="2024-11-13T11:58:29.041801496Z" level=info msg="RemovePodSandbox for \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\"" Nov 13 11:58:29.041911 containerd[1515]: time="2024-11-13T11:58:29.041897953Z" level=info msg="Forcibly stopping sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\"" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.098 [WARNING][5147] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0", GenerateName:"calico-kube-controllers-7f4895d8cb-", Namespace:"calico-system", SelfLink:"", UID:"b9eb2b70-ede6-4290-8483-7657e4c96a8b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4895d8cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b", Pod:"calico-kube-controllers-7f4895d8cb-7gh4j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75ef0cf2a16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.099 [INFO][5147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.099 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" iface="eth0" netns="" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.099 [INFO][5147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.099 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.144 [INFO][5153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.145 [INFO][5153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.145 [INFO][5153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.153 [WARNING][5153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.154 [INFO][5153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" HandleID="k8s-pod-network.064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.156 [INFO][5153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.161322 containerd[1515]: 2024-11-13 11:58:29.157 [INFO][5147] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7" Nov 13 11:58:29.161322 containerd[1515]: time="2024-11-13T11:58:29.159678827Z" level=info msg="TearDown network for sandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\" successfully" Nov 13 11:58:29.163054 containerd[1515]: time="2024-11-13T11:58:29.163019337Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 11:58:29.163255 containerd[1515]: time="2024-11-13T11:58:29.163236448Z" level=info msg="RemovePodSandbox \"064bc4be65a841d0edadb1da42894914b0b94459fbcedfc3e1cd96a74d0c5fc7\" returns successfully" Nov 13 11:58:29.163781 containerd[1515]: time="2024-11-13T11:58:29.163762623Z" level=info msg="StopPodSandbox for \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\"" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.213 [WARNING][5174] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8b9f6f4-7488-43e0-87ea-b08a68022038", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6", Pod:"calico-apiserver-8588545cd8-hgc4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f7f851bfd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.213 [INFO][5174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.213 [INFO][5174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" iface="eth0" netns="" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.213 [INFO][5174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.213 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.240 [INFO][5180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.240 [INFO][5180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.240 [INFO][5180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.253 [WARNING][5180] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.253 [INFO][5180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.258 [INFO][5180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.263996 containerd[1515]: 2024-11-13 11:58:29.261 [INFO][5174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.265040 containerd[1515]: time="2024-11-13T11:58:29.264052249Z" level=info msg="TearDown network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\" successfully" Nov 13 11:58:29.265040 containerd[1515]: time="2024-11-13T11:58:29.264086652Z" level=info msg="StopPodSandbox for \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\" returns successfully" Nov 13 11:58:29.266811 containerd[1515]: time="2024-11-13T11:58:29.266772176Z" level=info msg="RemovePodSandbox for \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\"" Nov 13 11:58:29.266885 containerd[1515]: time="2024-11-13T11:58:29.266829280Z" level=info msg="Forcibly stopping sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\"" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.360 [WARNING][5198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8b9f6f4-7488-43e0-87ea-b08a68022038", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"255ee61a18e99c3652a33e3c4f66398f9abd42728de8ed20322c0a65e7b7dcb6", Pod:"calico-apiserver-8588545cd8-hgc4l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f7f851bfd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.360 [INFO][5198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.360 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" iface="eth0" netns="" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.360 [INFO][5198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.360 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.394 [INFO][5204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.395 [INFO][5204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.395 [INFO][5204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.403 [WARNING][5204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.404 [INFO][5204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" HandleID="k8s-pod-network.daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--hgc4l-eth0" Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.406 [INFO][5204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.414367 containerd[1515]: 2024-11-13 11:58:29.409 [INFO][5198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e" Nov 13 11:58:29.414367 containerd[1515]: time="2024-11-13T11:58:29.411231197Z" level=info msg="TearDown network for sandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\" successfully" Nov 13 11:58:29.417419 containerd[1515]: time="2024-11-13T11:58:29.417383604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 11:58:29.418330 containerd[1515]: time="2024-11-13T11:58:29.418305376Z" level=info msg="RemovePodSandbox \"daefabc137c78c71cba1858efcb6631812e380fde6818cff42d819cc290c2b1e\" returns successfully" Nov 13 11:58:29.419217 containerd[1515]: time="2024-11-13T11:58:29.419116452Z" level=info msg="StopPodSandbox for \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\"" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.472 [WARNING][5221] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f284fb92-56b8-452d-85a1-fc02bb9810b6", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a", Pod:"coredns-7db6d8ff4d-gjw7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78b4920e14", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.472 [INFO][5221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.472 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" iface="eth0" netns="" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.472 [INFO][5221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.472 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.513 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.513 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.513 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.521 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.521 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.523 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.527964 containerd[1515]: 2024-11-13 11:58:29.526 [INFO][5221] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.529746 containerd[1515]: time="2024-11-13T11:58:29.528065483Z" level=info msg="TearDown network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\" successfully" Nov 13 11:58:29.529746 containerd[1515]: time="2024-11-13T11:58:29.528139746Z" level=info msg="StopPodSandbox for \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\" returns successfully" Nov 13 11:58:29.529746 containerd[1515]: time="2024-11-13T11:58:29.529066157Z" level=info msg="RemovePodSandbox for \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\"" Nov 13 11:58:29.529746 containerd[1515]: time="2024-11-13T11:58:29.529096775Z" level=info msg="Forcibly stopping sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\"" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.586 [WARNING][5245] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f284fb92-56b8-452d-85a1-fc02bb9810b6", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"f69e8f87d6a28573e7e020d16c40d91477a6bfd3ab2dea52b33bd4a56482ca7a", Pod:"coredns-7db6d8ff4d-gjw7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib78b4920e14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.587 [INFO][5245] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.587 [INFO][5245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" iface="eth0" netns="" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.587 [INFO][5245] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.587 [INFO][5245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.622 [INFO][5251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.623 [INFO][5251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.623 [INFO][5251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.631 [WARNING][5251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.631 [INFO][5251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" HandleID="k8s-pod-network.cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Workload="srv--gr2mf.gb1.brightbox.com-k8s-coredns--7db6d8ff4d--gjw7n-eth0" Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.634 [INFO][5251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.641730 containerd[1515]: 2024-11-13 11:58:29.638 [INFO][5245] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4" Nov 13 11:58:29.648505 containerd[1515]: time="2024-11-13T11:58:29.641823706Z" level=info msg="TearDown network for sandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\" successfully" Nov 13 11:58:29.670421 containerd[1515]: time="2024-11-13T11:58:29.669901567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 11:58:29.670421 containerd[1515]: time="2024-11-13T11:58:29.670094421Z" level=info msg="RemovePodSandbox \"cf80b4606d40efa79952746e4e2c57f405cf1c723c111f8b81e878e6d059b8f4\" returns successfully" Nov 13 11:58:29.672168 containerd[1515]: time="2024-11-13T11:58:29.672081872Z" level=info msg="StopPodSandbox for \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\"" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.749 [WARNING][5269] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a73a079a-5c98-427b-b55a-3d27769f0826", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf", Pod:"csi-node-driver-5tp6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63df1322774", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.750 [INFO][5269] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.750 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" iface="eth0" netns="" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.750 [INFO][5269] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.751 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.791 [INFO][5275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.791 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.791 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.800 [WARNING][5275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.800 [INFO][5275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.802 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.807667 containerd[1515]: 2024-11-13 11:58:29.805 [INFO][5269] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.809733 containerd[1515]: time="2024-11-13T11:58:29.807738788Z" level=info msg="TearDown network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\" successfully" Nov 13 11:58:29.809733 containerd[1515]: time="2024-11-13T11:58:29.807787172Z" level=info msg="StopPodSandbox for \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\" returns successfully" Nov 13 11:58:29.809733 containerd[1515]: time="2024-11-13T11:58:29.808930992Z" level=info msg="RemovePodSandbox for \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\"" Nov 13 11:58:29.809733 containerd[1515]: time="2024-11-13T11:58:29.809020507Z" level=info msg="Forcibly stopping sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\"" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.872 [WARNING][5293] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a73a079a-5c98-427b-b55a-3d27769f0826", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"f912b0d667a3cde7a65722e5a3b1dabdc8d29c742e476fad9b79c8885e4457bf", Pod:"csi-node-driver-5tp6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63df1322774", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.873 [INFO][5293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.873 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" iface="eth0" netns="" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.873 [INFO][5293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.873 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.910 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.910 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.910 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.921 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.921 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" HandleID="k8s-pod-network.006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Workload="srv--gr2mf.gb1.brightbox.com-k8s-csi--node--driver--5tp6q-eth0" Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.925 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:29.931229 containerd[1515]: 2024-11-13 11:58:29.928 [INFO][5293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d" Nov 13 11:58:29.932554 containerd[1515]: time="2024-11-13T11:58:29.931277239Z" level=info msg="TearDown network for sandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\" successfully" Nov 13 11:58:29.936136 containerd[1515]: time="2024-11-13T11:58:29.936055345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 11:58:29.936435 containerd[1515]: time="2024-11-13T11:58:29.936220959Z" level=info msg="RemovePodSandbox \"006fa4d57cb1a807046c8487ecd1cce84ebbdadba53e8507eecf1524e3cea65d\" returns successfully" Nov 13 11:58:29.937594 containerd[1515]: time="2024-11-13T11:58:29.937545714Z" level=info msg="StopPodSandbox for \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\"" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:29.995 [WARNING][5317] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfa244a-99c4-4135-8a9f-f544de1085c6", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99", Pod:"calico-apiserver-8588545cd8-kdzjc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali712d876c997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:29.995 [INFO][5317] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:29.995 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" iface="eth0" netns="" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:29.995 [INFO][5317] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:29.995 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:30.023 [INFO][5323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:30.024 [INFO][5323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:30.024 [INFO][5323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:30.030 [WARNING][5323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:30.031 [INFO][5323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:30.032 [INFO][5323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:30.036669 containerd[1515]: 2024-11-13 11:58:30.034 [INFO][5317] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.039023 containerd[1515]: time="2024-11-13T11:58:30.036724427Z" level=info msg="TearDown network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\" successfully" Nov 13 11:58:30.039023 containerd[1515]: time="2024-11-13T11:58:30.036750276Z" level=info msg="StopPodSandbox for \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\" returns successfully" Nov 13 11:58:30.039023 containerd[1515]: time="2024-11-13T11:58:30.037523289Z" level=info msg="RemovePodSandbox for \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\"" Nov 13 11:58:30.039023 containerd[1515]: time="2024-11-13T11:58:30.037551456Z" level=info msg="Forcibly stopping sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\"" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.113 [WARNING][5343] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0", GenerateName:"calico-apiserver-8588545cd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cfa244a-99c4-4135-8a9f-f544de1085c6", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8588545cd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"7dee4b55cf43fb3bbfde3b2222dc00740643b0f49a749a0fbf618bed5706de99", Pod:"calico-apiserver-8588545cd8-kdzjc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali712d876c997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.115 [INFO][5343] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.116 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" iface="eth0" netns="" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.116 [INFO][5343] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.118 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.152 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.153 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.153 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.162 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.162 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" HandleID="k8s-pod-network.188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--apiserver--8588545cd8--kdzjc-eth0" Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.164 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:30.169998 containerd[1515]: 2024-11-13 11:58:30.167 [INFO][5343] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e" Nov 13 11:58:30.169998 containerd[1515]: time="2024-11-13T11:58:30.169485163Z" level=info msg="TearDown network for sandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\" successfully" Nov 13 11:58:30.174436 containerd[1515]: time="2024-11-13T11:58:30.174100457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 11:58:30.174436 containerd[1515]: time="2024-11-13T11:58:30.174186444Z" level=info msg="RemovePodSandbox \"188c707f1dc1ccc79f3dc1b617cca7702a851fa5b4b03d4f3b7f1cc152ebe11e\" returns successfully" Nov 13 11:58:42.712935 update_engine[1497]: I20241113 11:58:42.712670 1497 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 13 11:58:42.712935 update_engine[1497]: I20241113 11:58:42.712776 1497 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 13 11:58:42.715274 update_engine[1497]: I20241113 11:58:42.714781 1497 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 13 11:58:42.715590 update_engine[1497]: I20241113 11:58:42.715561 1497 omaha_request_params.cc:62] Current group set to stable Nov 13 11:58:42.716974 update_engine[1497]: I20241113 11:58:42.716316 1497 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 13 11:58:42.716974 update_engine[1497]: I20241113 11:58:42.716346 1497 update_attempter.cc:643] Scheduling an action processor start. 
Nov 13 11:58:42.716974 update_engine[1497]: I20241113 11:58:42.716375 1497 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 13 11:58:42.716974 update_engine[1497]: I20241113 11:58:42.716429 1497 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 13 11:58:42.716974 update_engine[1497]: I20241113 11:58:42.716500 1497 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 13 11:58:42.716974 update_engine[1497]: I20241113 11:58:42.716512 1497 omaha_request_action.cc:272] Request: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: Nov 13 11:58:42.716974 update_engine[1497]: I20241113 11:58:42.716521 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 13 11:58:42.739834 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 13 11:58:42.745205 update_engine[1497]: I20241113 11:58:42.745155 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 13 11:58:42.745671 update_engine[1497]: I20241113 11:58:42.745633 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 13 11:58:42.756398 update_engine[1497]: E20241113 11:58:42.756368 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 13 11:58:42.756605 update_engine[1497]: I20241113 11:58:42.756587 1497 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 13 11:58:43.740503 systemd[1]: Started sshd@11-10.244.96.58:22-147.75.109.163:60710.service - OpenSSH per-connection server daemon (147.75.109.163:60710). 
Nov 13 11:58:44.679683 sshd[5413]: Accepted publickey for core from 147.75.109.163 port 60710 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:58:44.683034 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:58:44.695522 systemd-logind[1496]: New session 12 of user core. Nov 13 11:58:44.700385 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 13 11:58:45.863276 sshd[5413]: pam_unix(sshd:session): session closed for user core Nov 13 11:58:45.873270 systemd[1]: sshd@11-10.244.96.58:22-147.75.109.163:60710.service: Deactivated successfully. Nov 13 11:58:45.876417 systemd[1]: session-12.scope: Deactivated successfully. Nov 13 11:58:45.877984 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Nov 13 11:58:45.879348 systemd-logind[1496]: Removed session 12. Nov 13 11:58:51.031113 systemd[1]: Started sshd@12-10.244.96.58:22-147.75.109.163:37688.service - OpenSSH per-connection server daemon (147.75.109.163:37688). Nov 13 11:58:51.947991 sshd[5431]: Accepted publickey for core from 147.75.109.163 port 37688 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:58:51.952817 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:58:51.962898 systemd-logind[1496]: New session 13 of user core. Nov 13 11:58:51.971436 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 13 11:58:52.648582 update_engine[1497]: I20241113 11:58:52.648324 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 13 11:58:52.650518 update_engine[1497]: I20241113 11:58:52.649557 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 13 11:58:52.650965 update_engine[1497]: I20241113 11:58:52.650935 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 13 11:58:52.652035 update_engine[1497]: E20241113 11:58:52.651958 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 13 11:58:52.654417 update_engine[1497]: I20241113 11:58:52.654356 1497 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 13 11:58:52.740268 sshd[5431]: pam_unix(sshd:session): session closed for user core Nov 13 11:58:52.748734 systemd[1]: sshd@12-10.244.96.58:22-147.75.109.163:37688.service: Deactivated successfully. Nov 13 11:58:52.754955 systemd[1]: session-13.scope: Deactivated successfully. Nov 13 11:58:52.760636 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Nov 13 11:58:52.762184 systemd-logind[1496]: Removed session 13. Nov 13 11:58:52.872186 containerd[1515]: time="2024-11-13T11:58:52.872033148Z" level=info msg="StopContainer for \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\" with timeout 300 (s)" Nov 13 11:58:52.876635 containerd[1515]: time="2024-11-13T11:58:52.876477684Z" level=info msg="Stop container \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\" with signal terminated" Nov 13 11:58:52.997031 containerd[1515]: time="2024-11-13T11:58:52.996987224Z" level=info msg="StopContainer for \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\" with timeout 30 (s)" Nov 13 11:58:52.998400 containerd[1515]: time="2024-11-13T11:58:52.998370655Z" level=info msg="Stop container \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\" with signal terminated" Nov 13 11:58:53.031362 systemd[1]: cri-containerd-fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7.scope: Deactivated successfully. Nov 13 11:58:53.082691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7-rootfs.mount: Deactivated successfully. 
Nov 13 11:58:53.132233 containerd[1515]: time="2024-11-13T11:58:53.092668759Z" level=info msg="shim disconnected" id=fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7 namespace=k8s.io Nov 13 11:58:53.141858 containerd[1515]: time="2024-11-13T11:58:53.141796727Z" level=warning msg="cleaning up after shim disconnected" id=fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7 namespace=k8s.io Nov 13 11:58:53.141858 containerd[1515]: time="2024-11-13T11:58:53.141852372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 11:58:53.182591 containerd[1515]: time="2024-11-13T11:58:53.182456649Z" level=info msg="StopContainer for \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\" returns successfully" Nov 13 11:58:53.184387 containerd[1515]: time="2024-11-13T11:58:53.183139249Z" level=info msg="StopPodSandbox for \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\"" Nov 13 11:58:53.189494 containerd[1515]: time="2024-11-13T11:58:53.189456878Z" level=info msg="Container to stop \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 11:58:53.192931 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b-shm.mount: Deactivated successfully. Nov 13 11:58:53.202001 systemd[1]: cri-containerd-83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b.scope: Deactivated successfully. Nov 13 11:58:53.229029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b-rootfs.mount: Deactivated successfully. 
Nov 13 11:58:53.231158 containerd[1515]: time="2024-11-13T11:58:53.230598316Z" level=info msg="shim disconnected" id=83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b namespace=k8s.io Nov 13 11:58:53.231158 containerd[1515]: time="2024-11-13T11:58:53.230656646Z" level=warning msg="cleaning up after shim disconnected" id=83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b namespace=k8s.io Nov 13 11:58:53.231158 containerd[1515]: time="2024-11-13T11:58:53.230665914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 11:58:53.434328 systemd-networkd[1448]: cali75ef0cf2a16: Link DOWN Nov 13 11:58:53.434340 systemd-networkd[1448]: cali75ef0cf2a16: Lost carrier Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.422 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.425 [INFO][5527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" iface="eth0" netns="/var/run/netns/cni-d814483f-cf72-7b77-13d4-e2d0f51bfc45" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.426 [INFO][5527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" iface="eth0" netns="/var/run/netns/cni-d814483f-cf72-7b77-13d4-e2d0f51bfc45" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.441 [INFO][5527] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" after=16.205872ms iface="eth0" netns="/var/run/netns/cni-d814483f-cf72-7b77-13d4-e2d0f51bfc45" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.441 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.441 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.484 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.485 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.485 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.531 [INFO][5534] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.531 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0" Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.533 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:53.541187 containerd[1515]: 2024-11-13 11:58:53.535 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Nov 13 11:58:53.543495 containerd[1515]: time="2024-11-13T11:58:53.541526946Z" level=info msg="TearDown network for sandbox \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" successfully" Nov 13 11:58:53.543495 containerd[1515]: time="2024-11-13T11:58:53.541562260Z" level=info msg="StopPodSandbox for \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" returns successfully" Nov 13 11:58:53.542144 systemd[1]: run-netns-cni\x2dd814483f\x2dcf72\x2d7b77\x2d13d4\x2de2d0f51bfc45.mount: Deactivated successfully. 
Nov 13 11:58:53.583407 kubelet[2752]: I1113 11:58:53.583211 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mmst\" (UniqueName: \"kubernetes.io/projected/b9eb2b70-ede6-4290-8483-7657e4c96a8b-kube-api-access-6mmst\") pod \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\" (UID: \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\") " Nov 13 11:58:53.583407 kubelet[2752]: I1113 11:58:53.583306 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9eb2b70-ede6-4290-8483-7657e4c96a8b-tigera-ca-bundle\") pod \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\" (UID: \"b9eb2b70-ede6-4290-8483-7657e4c96a8b\") " Nov 13 11:58:53.598275 kubelet[2752]: I1113 11:58:53.596472 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9eb2b70-ede6-4290-8483-7657e4c96a8b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b9eb2b70-ede6-4290-8483-7657e4c96a8b" (UID: "b9eb2b70-ede6-4290-8483-7657e4c96a8b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 11:58:53.599638 systemd[1]: var-lib-kubelet-pods-b9eb2b70\x2dede6\x2d4290\x2d8483\x2d7657e4c96a8b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Nov 13 11:58:53.619855 kubelet[2752]: I1113 11:58:53.619813 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9eb2b70-ede6-4290-8483-7657e4c96a8b-kube-api-access-6mmst" (OuterVolumeSpecName: "kube-api-access-6mmst") pod "b9eb2b70-ede6-4290-8483-7657e4c96a8b" (UID: "b9eb2b70-ede6-4290-8483-7657e4c96a8b"). InnerVolumeSpecName "kube-api-access-6mmst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 11:58:53.686706 kubelet[2752]: I1113 11:58:53.686465 2752 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6mmst\" (UniqueName: \"kubernetes.io/projected/b9eb2b70-ede6-4290-8483-7657e4c96a8b-kube-api-access-6mmst\") on node \"srv-gr2mf.gb1.brightbox.com\" DevicePath \"\"" Nov 13 11:58:53.687312 kubelet[2752]: I1113 11:58:53.687251 2752 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9eb2b70-ede6-4290-8483-7657e4c96a8b-tigera-ca-bundle\") on node \"srv-gr2mf.gb1.brightbox.com\" DevicePath \"\"" Nov 13 11:58:54.083852 systemd[1]: var-lib-kubelet-pods-b9eb2b70\x2dede6\x2d4290\x2d8483\x2d7657e4c96a8b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6mmst.mount: Deactivated successfully. Nov 13 11:58:54.195325 kubelet[2752]: I1113 11:58:54.195059 2752 scope.go:117] "RemoveContainer" containerID="fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7" Nov 13 11:58:54.199104 containerd[1515]: time="2024-11-13T11:58:54.198695829Z" level=info msg="RemoveContainer for \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\"" Nov 13 11:58:54.206041 containerd[1515]: time="2024-11-13T11:58:54.206005355Z" level=info msg="RemoveContainer for \"fe293b2e887f77be7c3b14b2645531fb805d4fd075df6c4ff3ce0df6baa714c7\" returns successfully" Nov 13 11:58:54.210986 systemd[1]: Removed slice kubepods-besteffort-podb9eb2b70_ede6_4290_8483_7657e4c96a8b.slice - libcontainer container kubepods-besteffort-podb9eb2b70_ede6_4290_8483_7657e4c96a8b.slice. 
Nov 13 11:58:54.283352 kubelet[2752]: I1113 11:58:54.278837 2752 topology_manager.go:215] "Topology Admit Handler" podUID="4b96a5a4-8d7c-444c-b6f6-1bab38388480" podNamespace="calico-system" podName="calico-kube-controllers-586dbf8fd5-7snm5" Nov 13 11:58:54.294091 kubelet[2752]: E1113 11:58:54.293865 2752 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9eb2b70-ede6-4290-8483-7657e4c96a8b" containerName="calico-kube-controllers" Nov 13 11:58:54.294091 kubelet[2752]: I1113 11:58:54.294011 2752 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9eb2b70-ede6-4290-8483-7657e4c96a8b" containerName="calico-kube-controllers" Nov 13 11:58:54.320500 systemd[1]: Created slice kubepods-besteffort-pod4b96a5a4_8d7c_444c_b6f6_1bab38388480.slice - libcontainer container kubepods-besteffort-pod4b96a5a4_8d7c_444c_b6f6_1bab38388480.slice. Nov 13 11:58:54.394425 kubelet[2752]: I1113 11:58:54.393790 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4pc\" (UniqueName: \"kubernetes.io/projected/4b96a5a4-8d7c-444c-b6f6-1bab38388480-kube-api-access-8m4pc\") pod \"calico-kube-controllers-586dbf8fd5-7snm5\" (UID: \"4b96a5a4-8d7c-444c-b6f6-1bab38388480\") " pod="calico-system/calico-kube-controllers-586dbf8fd5-7snm5" Nov 13 11:58:54.394425 kubelet[2752]: I1113 11:58:54.393878 2752 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b96a5a4-8d7c-444c-b6f6-1bab38388480-tigera-ca-bundle\") pod \"calico-kube-controllers-586dbf8fd5-7snm5\" (UID: \"4b96a5a4-8d7c-444c-b6f6-1bab38388480\") " pod="calico-system/calico-kube-controllers-586dbf8fd5-7snm5" Nov 13 11:58:54.604940 kubelet[2752]: I1113 11:58:54.604707 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9eb2b70-ede6-4290-8483-7657e4c96a8b" path="/var/lib/kubelet/pods/b9eb2b70-ede6-4290-8483-7657e4c96a8b/volumes" 
Nov 13 11:58:54.633534 containerd[1515]: time="2024-11-13T11:58:54.633443579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-586dbf8fd5-7snm5,Uid:4b96a5a4-8d7c-444c-b6f6-1bab38388480,Namespace:calico-system,Attempt:0,}" Nov 13 11:58:54.870515 systemd-networkd[1448]: cali50337471c21: Link UP Nov 13 11:58:54.870703 systemd-networkd[1448]: cali50337471c21: Gained carrier Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.751 [INFO][5563] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0 calico-kube-controllers-586dbf8fd5- calico-system 4b96a5a4-8d7c-444c-b6f6-1bab38388480 1094 0 2024-11-13 11:58:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:586dbf8fd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gr2mf.gb1.brightbox.com calico-kube-controllers-586dbf8fd5-7snm5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali50337471c21 [] []}} ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.752 [INFO][5563] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.801 [INFO][5573] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" HandleID="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.813 [INFO][5573] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" HandleID="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042d690), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gr2mf.gb1.brightbox.com", "pod":"calico-kube-controllers-586dbf8fd5-7snm5", "timestamp":"2024-11-13 11:58:54.80179127 +0000 UTC"}, Hostname:"srv-gr2mf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.814 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.814 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.814 [INFO][5573] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gr2mf.gb1.brightbox.com' Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.816 [INFO][5573] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.823 [INFO][5573] ipam/ipam.go 372: Looking up existing affinities for host host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.831 [INFO][5573] ipam/ipam.go 489: Trying affinity for 192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.837 [INFO][5573] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.840 [INFO][5573] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.0/26 host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.840 [INFO][5573] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.0/26 handle="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.843 [INFO][5573] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.852 [INFO][5573] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.0/26 handle="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.861 [INFO][5573] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.7/26] block=192.168.55.0/26 handle="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.861 [INFO][5573] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.7/26] handle="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" host="srv-gr2mf.gb1.brightbox.com" Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.861 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 13 11:58:54.907686 containerd[1515]: 2024-11-13 11:58:54.861 [INFO][5573] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.7/26] IPv6=[] ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" HandleID="k8s-pod-network.7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" Nov 13 11:58:54.908513 containerd[1515]: 2024-11-13 11:58:54.866 [INFO][5563] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0", GenerateName:"calico-kube-controllers-586dbf8fd5-", Namespace:"calico-system", SelfLink:"", UID:"4b96a5a4-8d7c-444c-b6f6-1bab38388480", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 58, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"586dbf8fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-586dbf8fd5-7snm5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50337471c21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:54.908513 containerd[1515]: 2024-11-13 11:58:54.866 [INFO][5563] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.7/32] ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" Nov 13 11:58:54.908513 containerd[1515]: 2024-11-13 11:58:54.866 [INFO][5563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50337471c21 ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" Nov 13 11:58:54.908513 containerd[1515]: 2024-11-13 11:58:54.868 [INFO][5563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" Nov 13 11:58:54.908513 containerd[1515]: 2024-11-13 11:58:54.869 [INFO][5563] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0", GenerateName:"calico-kube-controllers-586dbf8fd5-", Namespace:"calico-system", SelfLink:"", UID:"4b96a5a4-8d7c-444c-b6f6-1bab38388480", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2024, time.November, 13, 11, 58, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"586dbf8fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gr2mf.gb1.brightbox.com", ContainerID:"7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee", Pod:"calico-kube-controllers-586dbf8fd5-7snm5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.7/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50337471c21", MAC:"d6:f1:69:ea:a1:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 13 11:58:54.908513 containerd[1515]: 2024-11-13 11:58:54.903 [INFO][5563] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee" Namespace="calico-system" Pod="calico-kube-controllers-586dbf8fd5-7snm5" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--586dbf8fd5--7snm5-eth0" Nov 13 11:58:54.950675 containerd[1515]: time="2024-11-13T11:58:54.946669944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 11:58:54.950675 containerd[1515]: time="2024-11-13T11:58:54.946745191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 11:58:54.950675 containerd[1515]: time="2024-11-13T11:58:54.946780152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:54.950675 containerd[1515]: time="2024-11-13T11:58:54.946891824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 11:58:54.985673 systemd[1]: Started cri-containerd-7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee.scope - libcontainer container 7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee. 
Nov 13 11:58:55.048054 containerd[1515]: time="2024-11-13T11:58:55.048017106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-586dbf8fd5-7snm5,Uid:4b96a5a4-8d7c-444c-b6f6-1bab38388480,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee\"" Nov 13 11:58:55.069705 containerd[1515]: time="2024-11-13T11:58:55.069568915Z" level=info msg="CreateContainer within sandbox \"7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 13 11:58:55.080601 containerd[1515]: time="2024-11-13T11:58:55.080537783Z" level=info msg="CreateContainer within sandbox \"7f10d42b5541c02a1da6e2f04079f5fcb77d2a50f980d5e352c388dd113dfcee\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"09b7b04dbd3c01669b659677cc607ec6f5f9aafcbebf1630e41f518324da775e\"" Nov 13 11:58:55.085365 containerd[1515]: time="2024-11-13T11:58:55.082288172Z" level=info msg="StartContainer for \"09b7b04dbd3c01669b659677cc607ec6f5f9aafcbebf1630e41f518324da775e\"" Nov 13 11:58:55.140360 systemd[1]: Started cri-containerd-09b7b04dbd3c01669b659677cc607ec6f5f9aafcbebf1630e41f518324da775e.scope - libcontainer container 09b7b04dbd3c01669b659677cc607ec6f5f9aafcbebf1630e41f518324da775e. 
Nov 13 11:58:55.204666 containerd[1515]: time="2024-11-13T11:58:55.204576037Z" level=info msg="StartContainer for \"09b7b04dbd3c01669b659677cc607ec6f5f9aafcbebf1630e41f518324da775e\" returns successfully" Nov 13 11:58:56.225533 kubelet[2752]: I1113 11:58:56.225338 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-586dbf8fd5-7snm5" podStartSLOduration=2.22518741 podStartE2EDuration="2.22518741s" podCreationTimestamp="2024-11-13 11:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 11:58:56.221625973 +0000 UTC m=+87.841534705" watchObservedRunningTime="2024-11-13 11:58:56.22518741 +0000 UTC m=+87.845119357" Nov 13 11:58:56.715652 systemd-networkd[1448]: cali50337471c21: Gained IPv6LL Nov 13 11:58:57.245476 systemd[1]: run-containerd-runc-k8s.io-09b7b04dbd3c01669b659677cc607ec6f5f9aafcbebf1630e41f518324da775e-runc.bLPdXh.mount: Deactivated successfully. Nov 13 11:58:57.332862 systemd[1]: cri-containerd-92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49.scope: Deactivated successfully. Nov 13 11:58:57.363272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49-rootfs.mount: Deactivated successfully. 
Nov 13 11:58:57.365625 containerd[1515]: time="2024-11-13T11:58:57.363739010Z" level=info msg="shim disconnected" id=92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49 namespace=k8s.io Nov 13 11:58:57.366060 containerd[1515]: time="2024-11-13T11:58:57.365708362Z" level=warning msg="cleaning up after shim disconnected" id=92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49 namespace=k8s.io Nov 13 11:58:57.366060 containerd[1515]: time="2024-11-13T11:58:57.365724446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 11:58:57.391350 containerd[1515]: time="2024-11-13T11:58:57.390626040Z" level=info msg="StopContainer for \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\" returns successfully" Nov 13 11:58:57.392420 containerd[1515]: time="2024-11-13T11:58:57.391742947Z" level=info msg="StopPodSandbox for \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\"" Nov 13 11:58:57.392420 containerd[1515]: time="2024-11-13T11:58:57.391777603Z" level=info msg="Container to stop \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 11:58:57.395990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12-shm.mount: Deactivated successfully. Nov 13 11:58:57.401659 systemd[1]: cri-containerd-f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12.scope: Deactivated successfully. 
Nov 13 11:58:57.441797 containerd[1515]: time="2024-11-13T11:58:57.441641154Z" level=info msg="shim disconnected" id=f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12 namespace=k8s.io Nov 13 11:58:57.441797 containerd[1515]: time="2024-11-13T11:58:57.441788819Z" level=warning msg="cleaning up after shim disconnected" id=f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12 namespace=k8s.io Nov 13 11:58:57.441797 containerd[1515]: time="2024-11-13T11:58:57.441798695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 11:58:57.442917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12-rootfs.mount: Deactivated successfully. Nov 13 11:58:57.463777 containerd[1515]: time="2024-11-13T11:58:57.463694022Z" level=info msg="TearDown network for sandbox \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" successfully" Nov 13 11:58:57.464317 containerd[1515]: time="2024-11-13T11:58:57.464277797Z" level=info msg="StopPodSandbox for \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" returns successfully" Nov 13 11:58:57.514607 kubelet[2752]: I1113 11:58:57.514487 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc6xw\" (UniqueName: \"kubernetes.io/projected/77a3ac83-6add-48c1-903c-e90868b112ef-kube-api-access-lc6xw\") pod \"77a3ac83-6add-48c1-903c-e90868b112ef\" (UID: \"77a3ac83-6add-48c1-903c-e90868b112ef\") " Nov 13 11:58:57.515255 kubelet[2752]: I1113 11:58:57.515234 2752 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77a3ac83-6add-48c1-903c-e90868b112ef-tigera-ca-bundle\") pod \"77a3ac83-6add-48c1-903c-e90868b112ef\" (UID: \"77a3ac83-6add-48c1-903c-e90868b112ef\") " Nov 13 11:58:57.515443 kubelet[2752]: I1113 11:58:57.515430 2752 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/77a3ac83-6add-48c1-903c-e90868b112ef-typha-certs\") pod \"77a3ac83-6add-48c1-903c-e90868b112ef\" (UID: \"77a3ac83-6add-48c1-903c-e90868b112ef\") " Nov 13 11:58:57.524528 kubelet[2752]: I1113 11:58:57.524314 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a3ac83-6add-48c1-903c-e90868b112ef-kube-api-access-lc6xw" (OuterVolumeSpecName: "kube-api-access-lc6xw") pod "77a3ac83-6add-48c1-903c-e90868b112ef" (UID: "77a3ac83-6add-48c1-903c-e90868b112ef"). InnerVolumeSpecName "kube-api-access-lc6xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 11:58:57.524888 kubelet[2752]: I1113 11:58:57.524537 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a3ac83-6add-48c1-903c-e90868b112ef-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "77a3ac83-6add-48c1-903c-e90868b112ef" (UID: "77a3ac83-6add-48c1-903c-e90868b112ef"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 13 11:58:57.526443 kubelet[2752]: I1113 11:58:57.526392 2752 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77a3ac83-6add-48c1-903c-e90868b112ef-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "77a3ac83-6add-48c1-903c-e90868b112ef" (UID: "77a3ac83-6add-48c1-903c-e90868b112ef"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 11:58:57.616383 kubelet[2752]: I1113 11:58:57.616273 2752 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77a3ac83-6add-48c1-903c-e90868b112ef-tigera-ca-bundle\") on node \"srv-gr2mf.gb1.brightbox.com\" DevicePath \"\"" Nov 13 11:58:57.616383 kubelet[2752]: I1113 11:58:57.616313 2752 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/77a3ac83-6add-48c1-903c-e90868b112ef-typha-certs\") on node \"srv-gr2mf.gb1.brightbox.com\" DevicePath \"\"" Nov 13 11:58:57.616383 kubelet[2752]: I1113 11:58:57.616335 2752 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lc6xw\" (UniqueName: \"kubernetes.io/projected/77a3ac83-6add-48c1-903c-e90868b112ef-kube-api-access-lc6xw\") on node \"srv-gr2mf.gb1.brightbox.com\" DevicePath \"\"" Nov 13 11:58:57.901699 systemd[1]: Started sshd@13-10.244.96.58:22-147.75.109.163:37700.service - OpenSSH per-connection server daemon (147.75.109.163:37700). Nov 13 11:58:58.215473 kubelet[2752]: I1113 11:58:58.215391 2752 scope.go:117] "RemoveContainer" containerID="92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49" Nov 13 11:58:58.218916 containerd[1515]: time="2024-11-13T11:58:58.218782525Z" level=info msg="RemoveContainer for \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\"" Nov 13 11:58:58.228613 systemd[1]: Removed slice kubepods-besteffort-pod77a3ac83_6add_48c1_903c_e90868b112ef.slice - libcontainer container kubepods-besteffort-pod77a3ac83_6add_48c1_903c_e90868b112ef.slice. 
Nov 13 11:58:58.231834 containerd[1515]: time="2024-11-13T11:58:58.231594698Z" level=info msg="RemoveContainer for \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\" returns successfully" Nov 13 11:58:58.237701 systemd[1]: var-lib-kubelet-pods-77a3ac83\x2d6add\x2d48c1\x2d903c\x2de90868b112ef-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Nov 13 11:58:58.237820 systemd[1]: var-lib-kubelet-pods-77a3ac83\x2d6add\x2d48c1\x2d903c\x2de90868b112ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlc6xw.mount: Deactivated successfully. Nov 13 11:58:58.237887 systemd[1]: var-lib-kubelet-pods-77a3ac83\x2d6add\x2d48c1\x2d903c\x2de90868b112ef-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Nov 13 11:58:58.240519 kubelet[2752]: I1113 11:58:58.240494 2752 scope.go:117] "RemoveContainer" containerID="92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49" Nov 13 11:58:58.263109 containerd[1515]: time="2024-11-13T11:58:58.250020141Z" level=error msg="ContainerStatus for \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\": not found" Nov 13 11:58:58.291919 kubelet[2752]: E1113 11:58:58.291824 2752 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\": not found" containerID="92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49" Nov 13 11:58:58.291919 kubelet[2752]: I1113 11:58:58.291880 2752 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49"} err="failed to get container status 
\"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\": rpc error: code = NotFound desc = an error occurred when try to find container \"92971ca5c4ff6b24398c5f8aa445fabe04d9343a5247a4be4467dee0f224af49\": not found" Nov 13 11:58:58.543459 kubelet[2752]: I1113 11:58:58.543041 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a3ac83-6add-48c1-903c-e90868b112ef" path="/var/lib/kubelet/pods/77a3ac83-6add-48c1-903c-e90868b112ef/volumes" Nov 13 11:58:58.845363 sshd[5805]: Accepted publickey for core from 147.75.109.163 port 37700 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:58:58.849095 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:58:58.857447 systemd-logind[1496]: New session 14 of user core. Nov 13 11:58:58.864371 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 13 11:58:59.599073 sshd[5805]: pam_unix(sshd:session): session closed for user core Nov 13 11:58:59.607951 systemd[1]: sshd@13-10.244.96.58:22-147.75.109.163:37700.service: Deactivated successfully. Nov 13 11:58:59.610919 systemd[1]: session-14.scope: Deactivated successfully. Nov 13 11:58:59.612161 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Nov 13 11:58:59.613917 systemd-logind[1496]: Removed session 14. Nov 13 11:59:02.419315 kubelet[2752]: I1113 11:59:02.418510 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 13 11:59:02.651239 update_engine[1497]: I20241113 11:59:02.649827 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 13 11:59:02.656066 update_engine[1497]: I20241113 11:59:02.653561 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 13 11:59:02.656066 update_engine[1497]: I20241113 11:59:02.655686 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 13 11:59:02.658024 update_engine[1497]: E20241113 11:59:02.657797 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 13 11:59:02.658024 update_engine[1497]: I20241113 11:59:02.657964 1497 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 13 11:59:04.771815 systemd[1]: Started sshd@14-10.244.96.58:22-147.75.109.163:57038.service - OpenSSH per-connection server daemon (147.75.109.163:57038). Nov 13 11:59:05.711227 sshd[5984]: Accepted publickey for core from 147.75.109.163 port 57038 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:59:05.715685 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:59:05.727979 systemd-logind[1496]: New session 15 of user core. Nov 13 11:59:05.737790 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 13 11:59:06.472300 sshd[5984]: pam_unix(sshd:session): session closed for user core Nov 13 11:59:06.476982 systemd[1]: sshd@14-10.244.96.58:22-147.75.109.163:57038.service: Deactivated successfully. Nov 13 11:59:06.479330 systemd[1]: session-15.scope: Deactivated successfully. Nov 13 11:59:06.481163 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Nov 13 11:59:06.483789 systemd-logind[1496]: Removed session 15. Nov 13 11:59:06.625445 systemd[1]: Started sshd@15-10.244.96.58:22-147.75.109.163:57048.service - OpenSSH per-connection server daemon (147.75.109.163:57048). Nov 13 11:59:07.522348 sshd[6043]: Accepted publickey for core from 147.75.109.163 port 57048 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:59:07.527402 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:59:07.536723 systemd-logind[1496]: New session 16 of user core. Nov 13 11:59:07.539380 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 13 11:59:08.282627 sshd[6043]: pam_unix(sshd:session): session closed for user core Nov 13 11:59:08.288980 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Nov 13 11:59:08.290338 systemd[1]: sshd@15-10.244.96.58:22-147.75.109.163:57048.service: Deactivated successfully. Nov 13 11:59:08.295411 systemd[1]: session-16.scope: Deactivated successfully. Nov 13 11:59:08.296946 systemd-logind[1496]: Removed session 16. Nov 13 11:59:08.446658 systemd[1]: Started sshd@16-10.244.96.58:22-147.75.109.163:57064.service - OpenSSH per-connection server daemon (147.75.109.163:57064). Nov 13 11:59:09.353318 sshd[6075]: Accepted publickey for core from 147.75.109.163 port 57064 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:59:09.356140 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:59:09.367505 systemd-logind[1496]: New session 17 of user core. Nov 13 11:59:09.376537 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 13 11:59:10.083679 sshd[6075]: pam_unix(sshd:session): session closed for user core Nov 13 11:59:10.101995 systemd[1]: sshd@16-10.244.96.58:22-147.75.109.163:57064.service: Deactivated successfully. Nov 13 11:59:10.112794 systemd[1]: session-17.scope: Deactivated successfully. Nov 13 11:59:10.115611 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Nov 13 11:59:10.121538 systemd-logind[1496]: Removed session 17. Nov 13 11:59:12.649443 update_engine[1497]: I20241113 11:59:12.648885 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 13 11:59:12.650597 update_engine[1497]: I20241113 11:59:12.650476 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 13 11:59:12.651461 update_engine[1497]: I20241113 11:59:12.651389 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 13 11:59:12.652324 update_engine[1497]: E20241113 11:59:12.652242 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 13 11:59:12.652541 update_engine[1497]: I20241113 11:59:12.652444 1497 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 13 11:59:12.652541 update_engine[1497]: I20241113 11:59:12.652485 1497 omaha_request_action.cc:617] Omaha request response: Nov 13 11:59:12.652946 update_engine[1497]: E20241113 11:59:12.652885 1497 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 13 11:59:12.653379 update_engine[1497]: I20241113 11:59:12.653331 1497 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 13 11:59:12.653379 update_engine[1497]: I20241113 11:59:12.653366 1497 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 13 11:59:12.653507 update_engine[1497]: I20241113 11:59:12.653383 1497 update_attempter.cc:306] Processing Done. Nov 13 11:59:12.653507 update_engine[1497]: E20241113 11:59:12.653456 1497 update_attempter.cc:619] Update failed. Nov 13 11:59:12.653507 update_engine[1497]: I20241113 11:59:12.653496 1497 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 13 11:59:12.653842 update_engine[1497]: I20241113 11:59:12.653512 1497 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 13 11:59:12.653842 update_engine[1497]: I20241113 11:59:12.653528 1497 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Nov 13 11:59:12.654366 update_engine[1497]: I20241113 11:59:12.654047 1497 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 13 11:59:12.654366 update_engine[1497]: I20241113 11:59:12.654162 1497 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 13 11:59:12.654366 update_engine[1497]: I20241113 11:59:12.654181 1497 omaha_request_action.cc:272] Request: Nov 13 11:59:12.654366 update_engine[1497]: Nov 13 11:59:12.654366 update_engine[1497]: Nov 13 11:59:12.654366 update_engine[1497]: Nov 13 11:59:12.654366 update_engine[1497]: Nov 13 11:59:12.654366 update_engine[1497]: Nov 13 11:59:12.654366 update_engine[1497]: Nov 13 11:59:12.654366 update_engine[1497]: I20241113 11:59:12.654237 1497 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 13 11:59:12.655894 update_engine[1497]: I20241113 11:59:12.654721 1497 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 13 11:59:12.655894 update_engine[1497]: I20241113 11:59:12.655730 1497 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 13 11:59:12.656601 update_engine[1497]: E20241113 11:59:12.656563 1497 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 13 11:59:12.656800 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 13 11:59:12.657676 update_engine[1497]: I20241113 11:59:12.657404 1497 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 13 11:59:12.657676 update_engine[1497]: I20241113 11:59:12.657438 1497 omaha_request_action.cc:617] Omaha request response: Nov 13 11:59:12.657676 update_engine[1497]: I20241113 11:59:12.657452 1497 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 13 11:59:12.657676 update_engine[1497]: I20241113 11:59:12.657462 1497 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 13 11:59:12.657676 update_engine[1497]: I20241113 11:59:12.657471 1497 update_attempter.cc:306] Processing Done. Nov 13 11:59:12.657676 update_engine[1497]: I20241113 11:59:12.657483 1497 update_attempter.cc:310] Error event sent. Nov 13 11:59:12.657676 update_engine[1497]: I20241113 11:59:12.657507 1497 update_check_scheduler.cc:74] Next update check in 42m52s Nov 13 11:59:12.658562 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 13 11:59:15.250516 systemd[1]: Started sshd@17-10.244.96.58:22-147.75.109.163:43172.service - OpenSSH per-connection server daemon (147.75.109.163:43172). Nov 13 11:59:16.171292 sshd[6234]: Accepted publickey for core from 147.75.109.163 port 43172 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:59:16.174021 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:59:16.184222 systemd-logind[1496]: New session 18 of user core. 
Nov 13 11:59:16.194380 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 13 11:59:16.897541 sshd[6234]: pam_unix(sshd:session): session closed for user core Nov 13 11:59:16.902920 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Nov 13 11:59:16.903566 systemd[1]: sshd@17-10.244.96.58:22-147.75.109.163:43172.service: Deactivated successfully. Nov 13 11:59:16.909390 systemd[1]: session-18.scope: Deactivated successfully. Nov 13 11:59:16.912675 systemd-logind[1496]: Removed session 18. Nov 13 11:59:22.071676 systemd[1]: Started sshd@18-10.244.96.58:22-147.75.109.163:48834.service - OpenSSH per-connection server daemon (147.75.109.163:48834). Nov 13 11:59:22.983095 sshd[6356]: Accepted publickey for core from 147.75.109.163 port 48834 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k Nov 13 11:59:22.987702 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 11:59:22.999495 systemd-logind[1496]: New session 19 of user core. Nov 13 11:59:23.005600 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 13 11:59:23.768135 sshd[6356]: pam_unix(sshd:session): session closed for user core Nov 13 11:59:23.782183 systemd[1]: sshd@18-10.244.96.58:22-147.75.109.163:48834.service: Deactivated successfully. Nov 13 11:59:23.786769 systemd[1]: session-19.scope: Deactivated successfully. Nov 13 11:59:23.787824 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Nov 13 11:59:23.789274 systemd-logind[1496]: Removed session 19. Nov 13 11:59:24.658361 systemd[1]: run-containerd-runc-k8s.io-09b7b04dbd3c01669b659677cc607ec6f5f9aafcbebf1630e41f518324da775e-runc.IY9aMu.mount: Deactivated successfully. Nov 13 11:59:28.929074 systemd[1]: Started sshd@19-10.244.96.58:22-147.75.109.163:48840.service - OpenSSH per-connection server daemon (147.75.109.163:48840). 
Nov 13 11:59:29.841994 sshd[6514]: Accepted publickey for core from 147.75.109.163 port 48840 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:59:29.846647 sshd[6514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:59:29.860689 systemd-logind[1496]: New session 20 of user core.
Nov 13 11:59:29.870459 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 13 11:59:30.188152 containerd[1515]: time="2024-11-13T11:59:30.188075180Z" level=info msg="StopPodSandbox for \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\""
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.252 [WARNING][6550] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.252 [INFO][6550] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.252 [INFO][6550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" iface="eth0" netns=""
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.252 [INFO][6550] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.252 [INFO][6550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.283 [INFO][6556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.283 [INFO][6556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.283 [INFO][6556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.290 [WARNING][6556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.290 [INFO][6556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.292 [INFO][6556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 13 11:59:30.296923 containerd[1515]: 2024-11-13 11:59:30.294 [INFO][6550] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.297468 containerd[1515]: time="2024-11-13T11:59:30.296982706Z" level=info msg="TearDown network for sandbox \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" successfully"
Nov 13 11:59:30.297468 containerd[1515]: time="2024-11-13T11:59:30.297016477Z" level=info msg="StopPodSandbox for \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" returns successfully"
Nov 13 11:59:30.297775 containerd[1515]: time="2024-11-13T11:59:30.297746785Z" level=info msg="RemovePodSandbox for \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\""
Nov 13 11:59:30.297817 containerd[1515]: time="2024-11-13T11:59:30.297789945Z" level=info msg="Forcibly stopping sandbox \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\""
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.349 [WARNING][6574] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" WorkloadEndpoint="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.349 [INFO][6574] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.350 [INFO][6574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" iface="eth0" netns=""
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.350 [INFO][6574] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.350 [INFO][6574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.375 [INFO][6588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.375 [INFO][6588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.375 [INFO][6588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.383 [WARNING][6588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.384 [INFO][6588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" HandleID="k8s-pod-network.83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b" Workload="srv--gr2mf.gb1.brightbox.com-k8s-calico--kube--controllers--7f4895d8cb--7gh4j-eth0"
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.390 [INFO][6588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 13 11:59:30.393961 containerd[1515]: 2024-11-13 11:59:30.392 [INFO][6574] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b"
Nov 13 11:59:30.393961 containerd[1515]: time="2024-11-13T11:59:30.393914093Z" level=info msg="TearDown network for sandbox \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" successfully"
Nov 13 11:59:30.397702 containerd[1515]: time="2024-11-13T11:59:30.397657090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 13 11:59:30.397806 containerd[1515]: time="2024-11-13T11:59:30.397737678Z" level=info msg="RemovePodSandbox \"83b9e6cf4bec177d5614bccb8d0b8e79dcfcbfdcc41c1dab400c3d4fa4e3fa4b\" returns successfully"
Nov 13 11:59:30.398622 containerd[1515]: time="2024-11-13T11:59:30.398596494Z" level=info msg="StopPodSandbox for \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\""
Nov 13 11:59:30.398705 containerd[1515]: time="2024-11-13T11:59:30.398677259Z" level=info msg="TearDown network for sandbox \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" successfully"
Nov 13 11:59:30.398705 containerd[1515]: time="2024-11-13T11:59:30.398688544Z" level=info msg="StopPodSandbox for \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" returns successfully"
Nov 13 11:59:30.399033 containerd[1515]: time="2024-11-13T11:59:30.399004964Z" level=info msg="RemovePodSandbox for \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\""
Nov 13 11:59:30.399090 containerd[1515]: time="2024-11-13T11:59:30.399034107Z" level=info msg="Forcibly stopping sandbox \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\""
Nov 13 11:59:30.399090 containerd[1515]: time="2024-11-13T11:59:30.399083822Z" level=info msg="TearDown network for sandbox \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" successfully"
Nov 13 11:59:30.406475 containerd[1515]: time="2024-11-13T11:59:30.406277707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 13 11:59:30.406475 containerd[1515]: time="2024-11-13T11:59:30.406356647Z" level=info msg="RemovePodSandbox \"f2c4694e9a612be6be59892172d2752be07f0489be877df395d20597b9b8cc12\" returns successfully"
Nov 13 11:59:30.560165 sshd[6514]: pam_unix(sshd:session): session closed for user core
Nov 13 11:59:30.569881 systemd[1]: sshd@19-10.244.96.58:22-147.75.109.163:48840.service: Deactivated successfully.
Nov 13 11:59:30.573483 systemd[1]: session-20.scope: Deactivated successfully.
Nov 13 11:59:30.575164 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
Nov 13 11:59:30.578061 systemd-logind[1496]: Removed session 20.
Nov 13 11:59:30.719290 systemd[1]: Started sshd@20-10.244.96.58:22-147.75.109.163:39342.service - OpenSSH per-connection server daemon (147.75.109.163:39342).
Nov 13 11:59:31.620961 sshd[6601]: Accepted publickey for core from 147.75.109.163 port 39342 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:59:31.625300 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:59:31.636907 systemd-logind[1496]: New session 21 of user core.
Nov 13 11:59:31.644355 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 13 11:59:32.648030 sshd[6601]: pam_unix(sshd:session): session closed for user core
Nov 13 11:59:32.659711 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit.
Nov 13 11:59:32.660441 systemd[1]: sshd@20-10.244.96.58:22-147.75.109.163:39342.service: Deactivated successfully.
Nov 13 11:59:32.664337 systemd[1]: session-21.scope: Deactivated successfully.
Nov 13 11:59:32.667269 systemd-logind[1496]: Removed session 21.
Nov 13 11:59:32.808156 systemd[1]: Started sshd@21-10.244.96.58:22-147.75.109.163:39354.service - OpenSSH per-connection server daemon (147.75.109.163:39354).
Nov 13 11:59:33.729648 sshd[6648]: Accepted publickey for core from 147.75.109.163 port 39354 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:59:33.742362 sshd[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:59:33.749324 systemd-logind[1496]: New session 22 of user core.
Nov 13 11:59:33.759484 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 13 11:59:37.065430 sshd[6648]: pam_unix(sshd:session): session closed for user core
Nov 13 11:59:37.074590 systemd[1]: sshd@21-10.244.96.58:22-147.75.109.163:39354.service: Deactivated successfully.
Nov 13 11:59:37.079298 systemd[1]: session-22.scope: Deactivated successfully.
Nov 13 11:59:37.082414 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit.
Nov 13 11:59:37.085035 systemd-logind[1496]: Removed session 22.
Nov 13 11:59:37.229772 systemd[1]: Started sshd@22-10.244.96.58:22-147.75.109.163:39360.service - OpenSSH per-connection server daemon (147.75.109.163:39360).
Nov 13 11:59:38.170011 sshd[6775]: Accepted publickey for core from 147.75.109.163 port 39360 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:59:38.171103 sshd[6775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:59:38.179998 systemd-logind[1496]: New session 23 of user core.
Nov 13 11:59:38.186375 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 13 11:59:39.371019 sshd[6775]: pam_unix(sshd:session): session closed for user core
Nov 13 11:59:39.377789 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit.
Nov 13 11:59:39.378130 systemd[1]: sshd@22-10.244.96.58:22-147.75.109.163:39360.service: Deactivated successfully.
Nov 13 11:59:39.382465 systemd[1]: session-23.scope: Deactivated successfully.
Nov 13 11:59:39.386515 systemd-logind[1496]: Removed session 23.
Nov 13 11:59:39.528498 systemd[1]: Started sshd@23-10.244.96.58:22-147.75.109.163:33484.service - OpenSSH per-connection server daemon (147.75.109.163:33484).
Nov 13 11:59:40.467182 sshd[6828]: Accepted publickey for core from 147.75.109.163 port 33484 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:59:40.475185 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:59:40.488353 systemd-logind[1496]: New session 24 of user core.
Nov 13 11:59:40.493406 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 13 11:59:41.368652 sshd[6828]: pam_unix(sshd:session): session closed for user core
Nov 13 11:59:41.376350 systemd[1]: sshd@23-10.244.96.58:22-147.75.109.163:33484.service: Deactivated successfully.
Nov 13 11:59:41.379810 systemd[1]: session-24.scope: Deactivated successfully.
Nov 13 11:59:41.382711 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit.
Nov 13 11:59:41.384553 systemd-logind[1496]: Removed session 24.
Nov 13 11:59:46.534557 systemd[1]: Started sshd@24-10.244.96.58:22-147.75.109.163:33498.service - OpenSSH per-connection server daemon (147.75.109.163:33498).
Nov 13 11:59:47.474110 sshd[7040]: Accepted publickey for core from 147.75.109.163 port 33498 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:59:47.479680 sshd[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:59:47.491401 systemd-logind[1496]: New session 25 of user core.
Nov 13 11:59:47.496660 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 13 11:59:48.376349 sshd[7040]: pam_unix(sshd:session): session closed for user core
Nov 13 11:59:48.384453 systemd[1]: sshd@24-10.244.96.58:22-147.75.109.163:33498.service: Deactivated successfully.
Nov 13 11:59:48.388039 systemd[1]: session-25.scope: Deactivated successfully.
Nov 13 11:59:48.389990 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit.
Nov 13 11:59:48.391314 systemd-logind[1496]: Removed session 25.
Nov 13 11:59:53.546675 systemd[1]: Started sshd@25-10.244.96.58:22-147.75.109.163:41970.service - OpenSSH per-connection server daemon (147.75.109.163:41970).
Nov 13 11:59:54.455004 sshd[7061]: Accepted publickey for core from 147.75.109.163 port 41970 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 11:59:54.459628 sshd[7061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 11:59:54.472218 systemd-logind[1496]: New session 26 of user core.
Nov 13 11:59:54.479431 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 13 11:59:55.248398 sshd[7061]: pam_unix(sshd:session): session closed for user core
Nov 13 11:59:55.255498 systemd[1]: sshd@25-10.244.96.58:22-147.75.109.163:41970.service: Deactivated successfully.
Nov 13 11:59:55.259973 systemd[1]: session-26.scope: Deactivated successfully.
Nov 13 11:59:55.263382 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit.
Nov 13 11:59:55.264540 systemd-logind[1496]: Removed session 26.
Nov 13 12:00:00.416914 systemd[1]: Started sshd@26-10.244.96.58:22-147.75.109.163:51566.service - OpenSSH per-connection server daemon (147.75.109.163:51566).
Nov 13 12:00:01.344346 sshd[7120]: Accepted publickey for core from 147.75.109.163 port 51566 ssh2: RSA SHA256:6zq1KeZH3fhJd7rNbiqRD8Qhg+Zgu4M5RIFDzzh/o6k
Nov 13 12:00:01.345081 sshd[7120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 12:00:01.353212 systemd-logind[1496]: New session 27 of user core.
Nov 13 12:00:01.357382 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 13 12:00:02.051435 sshd[7120]: pam_unix(sshd:session): session closed for user core
Nov 13 12:00:02.063069 systemd[1]: sshd@26-10.244.96.58:22-147.75.109.163:51566.service: Deactivated successfully.
Nov 13 12:00:02.067117 systemd[1]: session-27.scope: Deactivated successfully.
Nov 13 12:00:02.068505 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit.
Nov 13 12:00:02.069843 systemd-logind[1496]: Removed session 27.