Sep 4 17:51:39.125293 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:51:39.125340 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:51:39.125360 kernel: BIOS-provided physical RAM map: Sep 4 17:51:39.125372 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 4 17:51:39.125383 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 4 17:51:39.125395 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 4 17:51:39.125412 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 4 17:51:39.125432 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 4 17:51:39.125447 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Sep 4 17:51:39.125462 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Sep 4 17:51:39.125478 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Sep 4 17:51:39.125493 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Sep 4 17:51:39.125508 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 4 17:51:39.125523 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 4 17:51:39.125546 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 4 17:51:39.125563 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 4 17:51:39.125580 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 4 17:51:39.125597 kernel: NX (Execute Disable) protection: active Sep 4 17:51:39.125613 kernel: APIC: Static calls initialized Sep 4 17:51:39.125630 kernel: efi: EFI v2.7 by EDK II Sep 4 17:51:39.125646 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Sep 4 17:51:39.125687 kernel: SMBIOS 2.4 present. 
Sep 4 17:51:39.125705 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024 Sep 4 17:51:39.125722 kernel: Hypervisor detected: KVM Sep 4 17:51:39.125743 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:51:39.125759 kernel: kvm-clock: using sched offset of 11674557486 cycles Sep 4 17:51:39.125777 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:51:39.125794 kernel: tsc: Detected 2299.998 MHz processor Sep 4 17:51:39.125818 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:51:39.125836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:51:39.125853 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 4 17:51:39.125870 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 4 17:51:39.125888 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:51:39.125908 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 4 17:51:39.125926 kernel: Using GB pages for direct mapping Sep 4 17:51:39.125943 kernel: Secure boot disabled Sep 4 17:51:39.125960 kernel: ACPI: Early table checksum verification disabled Sep 4 17:51:39.125977 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 4 17:51:39.125995 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 4 17:51:39.126012 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 4 17:51:39.126037 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 4 17:51:39.126058 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 4 17:51:39.126077 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Sep 4 17:51:39.126095 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 4 17:51:39.126114 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 4 17:51:39.126132 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 4 17:51:39.126150 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 4 17:51:39.126172 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 4 17:51:39.126190 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 4 17:51:39.126209 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 4 17:51:39.126227 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 4 17:51:39.126245 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 4 17:51:39.126263 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 4 17:51:39.126281 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 4 17:51:39.126300 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 4 17:51:39.126318 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 4 17:51:39.126340 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 4 17:51:39.126358 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:51:39.126377 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:51:39.126394 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 4 17:51:39.126412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 4 17:51:39.126430 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 4 17:51:39.126448 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Sep 4 17:51:39.126467 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Sep 4 17:51:39.126485 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Sep 4 17:51:39.126508 kernel: Zone ranges: Sep 4 17:51:39.126525 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:51:39.126544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:51:39.126563 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 4 17:51:39.126581 kernel: Movable zone start for each node Sep 4 17:51:39.126599 kernel: Early memory node ranges Sep 4 17:51:39.126618 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 4 17:51:39.126636 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 4 17:51:39.126654 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Sep 4 17:51:39.126962 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 4 17:51:39.126980 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 4 17:51:39.126999 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 4 17:51:39.127018 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:51:39.127037 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 4 17:51:39.127055 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 4 17:51:39.127073 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 4 17:51:39.127092 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 4 17:51:39.127112 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:51:39.127134 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:51:39.127153 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:51:39.127171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:51:39.127189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:51:39.127208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:51:39.127226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:51:39.127245 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:51:39.127264 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:51:39.127282 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 4 17:51:39.127303 kernel: Booting paravirtualized kernel on KVM Sep 4 17:51:39.127322 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:51:39.127341 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:51:39.127360 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:51:39.127379 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:51:39.127397 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:51:39.127414 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:51:39.127433 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:51:39.127454 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 
flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:51:39.127477 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:51:39.127496 kernel: random: crng init done Sep 4 17:51:39.127513 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:51:39.127532 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:51:39.127551 kernel: Fallback order for Node 0: 0 Sep 4 17:51:39.127569 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Sep 4 17:51:39.127588 kernel: Policy zone: Normal Sep 4 17:51:39.127606 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:51:39.127628 kernel: software IO TLB: area num 2. Sep 4 17:51:39.127647 kernel: Memory: 7515640K/7860584K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 344684K reserved, 0K cma-reserved) Sep 4 17:51:39.127681 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:51:39.127700 kernel: Kernel/User page tables isolation: enabled Sep 4 17:51:39.127719 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:51:39.127737 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:51:39.127755 kernel: Dynamic Preempt: voluntary Sep 4 17:51:39.127774 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:51:39.127794 kernel: rcu: RCU event tracing is enabled. Sep 4 17:51:39.127838 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:51:39.127858 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:51:39.127878 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:51:39.127902 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:51:39.127921 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:51:39.127941 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:51:39.127961 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 4 17:51:39.127981 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:51:39.128000 kernel: Console: colour dummy device 80x25 Sep 4 17:51:39.128024 kernel: printk: console [ttyS0] enabled Sep 4 17:51:39.128044 kernel: ACPI: Core revision 20230628 Sep 4 17:51:39.128062 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:51:39.128082 kernel: x2apic enabled Sep 4 17:51:39.128102 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:51:39.128122 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 4 17:51:39.128143 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 4 17:51:39.128162 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Sep 4 17:51:39.128186 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 4 17:51:39.128206 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 4 17:51:39.128226 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:51:39.128246 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 4 17:51:39.128265 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 4 17:51:39.128285 kernel: Spectre V2 : Mitigation: IBRS Sep 4 17:51:39.128305 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:51:39.128325 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:51:39.128344 kernel: RETBleed: Mitigation: IBRS Sep 4 17:51:39.128368 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 17:51:39.128387 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 4 17:51:39.128407 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 17:51:39.128426 kernel: MDS: Mitigation: Clear CPU buffers Sep 4 17:51:39.128446 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:51:39.128466 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:51:39.128485 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:51:39.128500 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:51:39.128517 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:51:39.128540 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 4 17:51:39.128560 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:51:39.128580 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:51:39.128600 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:51:39.128619 kernel: landlock: Up and running. Sep 4 17:51:39.128639 kernel: SELinux: Initializing. Sep 4 17:51:39.128669 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.128702 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.128722 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 4 17:51:39.128745 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:51:39.128761 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:51:39.128778 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:51:39.128803 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 4 17:51:39.128821 kernel: signal: max sigframe size: 1776 Sep 4 17:51:39.128836 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:51:39.128857 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:51:39.128880 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:51:39.128903 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:51:39.128929 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:51:39.128945 kernel: .... node #0, CPUs: #1 Sep 4 17:51:39.128966 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
Sep 4 17:51:39.128986 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 4 17:51:39.129005 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:51:39.129024 kernel: smpboot: Max logical packages: 1 Sep 4 17:51:39.129043 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 4 17:51:39.129061 kernel: devtmpfs: initialized Sep 4 17:51:39.129084 kernel: x86/mm: Memory block size: 128MB Sep 4 17:51:39.129103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 4 17:51:39.129123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:51:39.129142 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:51:39.129161 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:51:39.129180 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:51:39.129199 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:51:39.129218 kernel: audit: type=2000 audit(1725472297.753:1): state=initialized audit_enabled=0 res=1 Sep 4 17:51:39.129236 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:51:39.129259 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:51:39.129278 kernel: cpuidle: using governor menu Sep 4 17:51:39.129296 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:51:39.129315 kernel: dca service started, version 1.12.1 Sep 4 17:51:39.129334 kernel: PCI: Using configuration type 1 for base access Sep 4 17:51:39.129353 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 17:51:39.129372 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:51:39.129391 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:51:39.129410 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:51:39.129433 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:51:39.129452 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:51:39.129470 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:51:39.129489 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:51:39.129509 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:51:39.129528 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 4 17:51:39.129546 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:51:39.129565 kernel: ACPI: Interpreter enabled Sep 4 17:51:39.129584 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 17:51:39.129606 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:51:39.129626 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:51:39.129644 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:51:39.129685 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 4 17:51:39.129705 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:51:39.130006 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:51:39.130204 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 4 17:51:39.130382 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 4 17:51:39.130410 kernel: PCI host bridge to bus 0000:00 Sep 4 17:51:39.130583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:51:39.130777 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:51:39.130950 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 17:51:39.131111 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 4 17:51:39.131271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:51:39.131470 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:51:39.131710 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Sep 4 17:51:39.132088 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 17:51:39.132308 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:51:39.132511 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Sep 4 17:51:39.132763 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Sep 4 17:51:39.132963 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Sep 4 17:51:39.133173 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 4 17:51:39.133362 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Sep 4 17:51:39.133550 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Sep 4 17:51:39.133811 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:51:39.133997 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 4 17:51:39.134187 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Sep 4 17:51:39.134219 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:51:39.134238 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:51:39.134257 kernel: ACPI: PCI: Interrupt link LNKC 
configured for IRQ 11 Sep 4 17:51:39.134276 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:51:39.134295 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:51:39.134314 kernel: iommu: Default domain type: Translated Sep 4 17:51:39.134333 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:51:39.134351 kernel: efivars: Registered efivars operations Sep 4 17:51:39.134371 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:51:39.134390 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:51:39.134413 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 4 17:51:39.134432 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 4 17:51:39.134450 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 4 17:51:39.134469 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 4 17:51:39.134487 kernel: vgaarb: loaded Sep 4 17:51:39.134506 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:51:39.134525 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:51:39.134543 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:51:39.134566 kernel: pnp: PnP ACPI init Sep 4 17:51:39.134584 kernel: pnp: PnP ACPI: found 7 devices Sep 4 17:51:39.134601 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:51:39.134620 kernel: NET: Registered PF_INET protocol family Sep 4 17:51:39.134639 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:51:39.134671 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:51:39.134692 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:51:39.134710 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:51:39.134735 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:51:39.134759 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:51:39.134786 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.134804 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.134823 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:51:39.134842 kernel: NET: Registered PF_XDP protocol family Sep 4 17:51:39.135027 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 17:51:39.135197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:51:39.135366 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:51:39.135540 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 4 17:51:39.138188 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:51:39.138343 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:51:39.138364 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:51:39.138384 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 4 17:51:39.138403 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:51:39.138423 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 4 17:51:39.138549 kernel: clocksource: Switched to clocksource tsc Sep 4 17:51:39.138575 kernel: Initialise system trusted keyrings Sep 4 17:51:39.138594 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 
17:51:39.138613 kernel: Key type asymmetric registered Sep 4 17:51:39.138631 kernel: Asymmetric key parser 'x509' registered Sep 4 17:51:39.139233 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:51:39.139254 kernel: io scheduler mq-deadline registered Sep 4 17:51:39.139273 kernel: io scheduler kyber registered Sep 4 17:51:39.139292 kernel: io scheduler bfq registered Sep 4 17:51:39.139311 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:51:39.139438 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 17:51:39.140052 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 4 17:51:39.140089 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 17:51:39.140314 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 4 17:51:39.140339 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 17:51:39.140534 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 4 17:51:39.140559 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:51:39.140578 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:51:39.140599 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:51:39.140625 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 4 17:51:39.140645 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 4 17:51:39.140921 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 4 17:51:39.140949 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:51:39.140968 kernel: i8042: Warning: Keylock active Sep 4 17:51:39.140984 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:51:39.141000 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:51:39.141189 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 4 17:51:39.141382 kernel: rtc_cmos 00:00: registered as rtc0 Sep 4 17:51:39.141562 kernel: rtc_cmos 00:00: setting system clock to 2024-09-04T17:51:38 UTC (1725472298) Sep 4 17:51:39.143811 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 4 17:51:39.143851 kernel: intel_pstate: CPU model not supported Sep 4 17:51:39.143868 kernel: pstore: Using crash dump compression: deflate Sep 4 17:51:39.143887 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:51:39.143906 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:51:39.143922 kernel: Segment Routing with IPv6 Sep 4 17:51:39.144049 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:51:39.144069 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:51:39.144089 kernel: Key type dns_resolver registered Sep 4 17:51:39.144109 kernel: IPI shorthand broadcast: enabled Sep 4 17:51:39.144128 kernel: sched_clock: Marking stable (858003948, 146449383)->(1041876786, -37423455) Sep 4 17:51:39.144148 kernel: registered taskstats version 1 Sep 4 17:51:39.144231 kernel: Loading compiled-in X.509 certificates Sep 4 17:51:39.144250 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:51:39.144269 kernel: Key type .fscrypt registered Sep 4 17:51:39.144293 kernel: Key type fscrypt-provisioning registered Sep 4 17:51:39.144313 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:51:39.144333 kernel: ima: No architecture policies found Sep 4 17:51:39.144352 kernel: clk: Disabling unused clocks Sep 4 17:51:39.144371 kernel: Freeing unused kernel image (initmem) 
memory: 42704K Sep 4 17:51:39.144391 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:51:39.144411 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Sep 4 17:51:39.144431 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:51:39.144455 kernel: Run /init as init process Sep 4 17:51:39.144475 kernel: with arguments: Sep 4 17:51:39.144492 kernel: /init Sep 4 17:51:39.144508 kernel: with environment: Sep 4 17:51:39.144525 kernel: HOME=/ Sep 4 17:51:39.144543 kernel: TERM=linux Sep 4 17:51:39.144562 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:51:39.144586 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:51:39.144613 systemd[1]: Detected virtualization google. Sep 4 17:51:39.144633 systemd[1]: Detected architecture x86-64. Sep 4 17:51:39.144652 systemd[1]: Running in initrd. Sep 4 17:51:39.144687 systemd[1]: No hostname configured, using default hostname. Sep 4 17:51:39.144707 systemd[1]: Hostname set to . Sep 4 17:51:39.144736 systemd[1]: Initializing machine ID from random generator. Sep 4 17:51:39.144753 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:51:39.144774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:51:39.144797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:51:39.144819 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:51:39.144838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:51:39.144857 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:51:39.144877 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:51:39.144899 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:51:39.144919 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:51:39.144941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:51:39.144962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:51:39.145000 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:51:39.145025 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:51:39.145044 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:51:39.145065 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:51:39.145088 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:51:39.145109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:51:39.145129 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:51:39.145149 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:51:39.145170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 4 17:51:39.145190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:51:39.145211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:51:39.145231 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:51:39.145251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:51:39.145276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:51:39.145296 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:51:39.145316 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:51:39.145336 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:51:39.145357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:51:39.145376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:51:39.145397 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:51:39.145454 systemd-journald[183]: Collecting audit messages is disabled. Sep 4 17:51:39.145502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:51:39.145522 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:51:39.145548 systemd-journald[183]: Journal started Sep 4 17:51:39.145587 systemd-journald[183]: Runtime Journal (/run/log/journal/7503804a1c0d45a192737c4c4fa7832a) is 8.0M, max 148.7M, 140.7M free. Sep 4 17:51:39.143853 systemd-modules-load[184]: Inserted module 'overlay' Sep 4 17:51:39.151817 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:51:39.174940 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:51:39.187699 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:51:39.189017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:51:39.197960 kernel: Bridge firewalling registered Sep 4 17:51:39.190968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:39.192131 systemd-modules-load[184]: Inserted module 'br_netfilter' Sep 4 17:51:39.203318 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:51:39.210088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:51:39.221293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:51:39.235937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:51:39.243866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:51:39.256820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:51:39.275955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:51:39.286874 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:51:39.299047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:39.299845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:51:39.321970 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 4 17:51:39.337253 systemd-resolved[212]: Positive Trust Anchors: Sep 4 17:51:39.337272 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:51:39.337337 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:51:39.343250 systemd-resolved[212]: Defaulting to hostname 'linux'. Sep 4 17:51:39.371861 dracut-cmdline[217]: dracut-dracut-053 Sep 4 17:51:39.371861 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:51:39.345713 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:51:39.362963 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:51:39.456708 kernel: SCSI subsystem initialized Sep 4 17:51:39.466689 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:51:39.479693 kernel: iscsi: registered transport (tcp) Sep 4 17:51:39.503714 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:51:39.503812 kernel: QLogic iSCSI HBA Driver Sep 4 17:51:39.556382 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:51:39.562856 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:51:39.596996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:51:39.597087 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:51:39.597115 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:51:39.642707 kernel: raid6: avx2x4 gen() 17393 MB/s Sep 4 17:51:39.659701 kernel: raid6: avx2x2 gen() 17695 MB/s Sep 4 17:51:39.677061 kernel: raid6: avx2x1 gen() 13773 MB/s Sep 4 17:51:39.677114 kernel: raid6: using algorithm avx2x2 gen() 17695 MB/s Sep 4 17:51:39.695160 kernel: raid6: .... xor() 17658 MB/s, rmw enabled Sep 4 17:51:39.695219 kernel: raid6: using avx2x2 recovery algorithm Sep 4 17:51:39.718709 kernel: xor: automatically using best checksumming function avx Sep 4 17:51:39.899733 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:51:39.913413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:51:39.924906 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:51:39.959158 systemd-udevd[399]: Using default interface naming scheme 'v255'. Sep 4 17:51:39.966457 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:51:39.977908 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 4 17:51:40.008905 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Sep 4 17:51:40.045869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:51:40.052013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:51:40.146387 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:51:40.159897 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:51:40.196653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:51:40.208845 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:51:40.216783 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:51:40.221997 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:51:40.238925 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:51:40.250978 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:51:40.271876 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:51:40.274133 kernel: AES CTR mode by8 optimization enabled Sep 4 17:51:40.284094 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:51:40.317134 kernel: scsi host0: Virtio SCSI HBA Sep 4 17:51:40.328829 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 4 17:51:40.366412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:51:40.367383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:40.380736 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:51:40.382326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:51:40.382597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:40.382896 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:51:40.391299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:51:40.416768 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 4 17:51:40.417100 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 4 17:51:40.417313 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 4 17:51:40.417718 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 4 17:51:40.418345 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 17:51:40.426715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:51:40.426799 kernel: GPT:17805311 != 25165823 Sep 4 17:51:40.426823 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:51:40.426845 kernel: GPT:17805311 != 25165823 Sep 4 17:51:40.426867 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:51:40.426889 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:40.431696 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 4 17:51:40.432486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:40.443913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 17:51:40.487694 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (460) Sep 4 17:51:40.490731 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (449) Sep 4 17:51:40.513008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:40.528015 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 4 17:51:40.535854 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 4 17:51:40.547903 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 4 17:51:40.548166 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 4 17:51:40.565799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 4 17:51:40.577908 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:51:40.591815 disk-uuid[549]: Primary Header is updated. Sep 4 17:51:40.591815 disk-uuid[549]: Secondary Entries is updated. Sep 4 17:51:40.591815 disk-uuid[549]: Secondary Header is updated. Sep 4 17:51:40.605728 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:40.622688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:40.641708 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:41.637705 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:41.639203 disk-uuid[550]: The operation has completed successfully. Sep 4 17:51:41.724612 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:51:41.724782 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:51:41.743882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:51:41.782797 sh[567]: Success Sep 4 17:51:41.807785 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:51:41.897239 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:51:41.904647 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:51:41.944550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:51:41.976735 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 17:51:41.976826 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:42.002310 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:51:42.002405 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:51:42.002430 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:51:42.039804 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 17:51:42.046748 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:51:42.047768 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:51:42.052927 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 4 17:51:42.127946 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:42.127993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:42.128019 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:51:42.128042 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 17:51:42.128072 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:51:42.108992 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:51:42.150928 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:42.165233 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:51:42.186026 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:51:42.357592 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:51:42.378018 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:51:42.397783 ignition[645]: Ignition 2.19.0 Sep 4 17:51:42.397801 ignition[645]: Stage: fetch-offline Sep 4 17:51:42.399543 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:51:42.397861 ignition[645]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.436474 systemd-networkd[753]: lo: Link UP Sep 4 17:51:42.397876 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.436480 systemd-networkd[753]: lo: Gained carrier Sep 4 17:51:42.398018 ignition[645]: parsed url from cmdline: "" Sep 4 17:51:42.438107 systemd-networkd[753]: Enumeration completed Sep 4 17:51:42.398026 ignition[645]: no config URL provided Sep 4 17:51:42.438805 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:51:42.398035 ignition[645]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:51:42.438887 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:51:42.398050 ignition[645]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:51:42.438893 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:51:42.398061 ignition[645]: failed to fetch config: resource requires networking Sep 4 17:51:42.440923 systemd-networkd[753]: eth0: Link UP Sep 4 17:51:42.398397 ignition[645]: Ignition finished successfully Sep 4 17:51:42.440930 systemd-networkd[753]: eth0: Gained carrier Sep 4 17:51:42.520678 ignition[758]: Ignition 2.19.0 Sep 4 17:51:42.440941 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:51:42.520692 ignition[758]: Stage: fetch Sep 4 17:51:42.456744 systemd-networkd[753]: eth0: DHCPv4 address 10.128.0.52/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 4 17:51:42.520946 ignition[758]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.458198 systemd[1]: Reached target network.target - Network. Sep 4 17:51:42.520958 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.473021 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 17:51:42.521079 ignition[758]: parsed url from cmdline: "" Sep 4 17:51:42.530555 unknown[758]: fetched base config from "system" Sep 4 17:51:42.521086 ignition[758]: no config URL provided Sep 4 17:51:42.530569 unknown[758]: fetched base config from "system" Sep 4 17:51:42.521095 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:51:42.530580 unknown[758]: fetched user config from "gcp" Sep 4 17:51:42.521108 ignition[758]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:51:42.533016 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:51:42.521134 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 4 17:51:42.551921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:51:42.525589 ignition[758]: GET result: OK Sep 4 17:51:42.606835 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:51:42.525728 ignition[758]: parsing config with SHA512: 31a4fc7d3870f947626d12e6917d5893c83954c269c2cc20de9816afdbfcacf3666baf3ff1cf86aa059a389a537d01105b86236f7876cf55059d21e6f96426fb Sep 4 17:51:42.613943 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:51:42.531047 ignition[758]: fetch: fetch complete Sep 4 17:51:42.652921 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:51:42.531054 ignition[758]: fetch: fetch passed Sep 4 17:51:42.676118 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:51:42.531116 ignition[758]: Ignition finished successfully Sep 4 17:51:42.693863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:51:42.604278 ignition[764]: Ignition 2.19.0 Sep 4 17:51:42.715888 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:51:42.604288 ignition[764]: Stage: kargs Sep 4 17:51:42.731926 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:51:42.604506 ignition[764]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.747848 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:51:42.604518 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.771918 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:51:42.605558 ignition[764]: kargs: kargs passed Sep 4 17:51:42.605615 ignition[764]: Ignition finished successfully Sep 4 17:51:42.650437 ignition[769]: Ignition 2.19.0 Sep 4 17:51:42.650452 ignition[769]: Stage: disks Sep 4 17:51:42.650709 ignition[769]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.650723 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.651700 ignition[769]: disks: disks passed Sep 4 17:51:42.651776 ignition[769]: Ignition finished successfully Sep 4 17:51:42.811416 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 4 17:51:43.008861 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:51:43.013857 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:51:43.164819 kernel: EXT4-fs (sda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 17:51:43.165728 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:51:43.166554 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:51:43.188808 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 17:51:43.213123 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:51:43.233695 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Sep 4 17:51:43.253024 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:43.253127 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:43.253154 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:51:43.259995 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:51:43.299868 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 17:51:43.299914 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:51:43.260082 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:51:43.260125 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:51:43.274002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:51:43.309652 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:51:43.339877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:51:43.474262 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:51:43.485857 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:51:43.495839 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:51:43.506840 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:51:43.643967 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:51:43.648846 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:51:43.666933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:51:43.699710 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:43.706108 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:51:43.745571 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:51:43.754992 ignition[899]: INFO : Ignition 2.19.0 Sep 4 17:51:43.754992 ignition[899]: INFO : Stage: mount Sep 4 17:51:43.754992 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:43.754992 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:43.754992 ignition[899]: INFO : mount: mount passed Sep 4 17:51:43.754992 ignition[899]: INFO : Ignition finished successfully Sep 4 17:51:43.765205 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:51:43.788827 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:51:44.124894 systemd-networkd[753]: eth0: Gained IPv6LL Sep 4 17:51:44.172967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 17:51:44.213651 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (911) Sep 4 17:51:44.213725 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:44.213750 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:44.213773 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:51:44.236801 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 17:51:44.236898 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:51:44.240808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:51:44.281716 ignition[928]: INFO : Ignition 2.19.0 Sep 4 17:51:44.281716 ignition[928]: INFO : Stage: files Sep 4 17:51:44.295816 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:44.295816 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:44.295816 ignition[928]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:51:44.295816 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:51:44.293147 unknown[928]: wrote ssh authorized keys file for user: core Sep 4 17:51:44.397836 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:51:44.397836 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:51:44.397836 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:51:44.496772 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:51:44.513856 
ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Sep 4 17:51:44.792706 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:51:45.341682 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:51:45.359875 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:51:45.359875 ignition[928]: INFO : files: files passed Sep 4 17:51:45.359875 ignition[928]: INFO : Ignition finished successfully Sep 4 17:51:45.346256 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:51:45.365909 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:51:45.384872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:51:45.420352 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:51:45.585903 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:51:45.585903 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:51:45.420479 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
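The files stage logged above is driven entirely by the Ignition config supplied to the instance: the helm tarball fetch, the inline manifests under /home/core, the kubernetes sysext link, the SSH key added to the "core" user, and the enablement of prepare-helm.service each correspond to one entry in that config. As an illustration only, the following Python sketch assembles a config of that shape, assuming Ignition spec 3.x field names; the actual config, URLs, file contents, and unit text used by this machine are not visible in the log.

import json

# Hypothetical sketch of an Ignition-style config (spec 3.x field names assumed)
# that would produce operations like the ones logged above. All values below
# are placeholders, not the real config for this instance.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {
                # files: ensureUsers op(2) -- "adding ssh keys to user core"
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"],
            }
        ]
    },
    "storage": {
        "files": [
            {
                # createFiles op(3) -- tarball fetched over HTTPS into /opt
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                # createFiles op(8) -- small inline file via a data: URL (placeholder body)
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,REBOOT_STRATEGY%3Doff%0A"},
                "mode": 420,  # decimal for 0644
            },
        ],
        "links": [
            {
                # createFiles op(9) -- symlink that activates the sysext image
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                # files op(b)/op(d) -- write and enable prepare-helm.service
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n...",
            }
        ]
    },
}

print(json.dumps(config, indent=2))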
Sep 4 17:51:45.652865 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:51:45.485703 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:51:45.490144 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:51:45.520982 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:51:45.585244 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:51:45.585374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:51:45.597080 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:51:45.610974 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:51:45.643099 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:51:45.649997 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:51:45.703956 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:51:45.729904 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:51:45.761768 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:51:45.777191 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:51:45.787193 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:51:45.817135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:51:45.817339 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:51:45.848218 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:51:45.859597 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:51:45.878272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:51:45.905103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:51:45.924109 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:51:45.943107 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:51:45.965167 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:51:45.976332 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:51:45.998378 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:51:46.017225 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:51:46.035123 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:51:46.035322 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:51:46.069158 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:51:46.080182 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:51:46.098175 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:51:46.098348 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:51:46.135072 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:51:46.135271 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 4 17:51:46.259887 ignition[981]: INFO : Ignition 2.19.0 Sep 4 17:51:46.259887 ignition[981]: INFO : Stage: umount Sep 4 17:51:46.259887 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:46.259887 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:46.259887 ignition[981]: INFO : umount: umount passed Sep 4 17:51:46.259887 ignition[981]: INFO : Ignition finished successfully Sep 4 17:51:46.163316 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:51:46.163561 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:51:46.174593 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:51:46.174930 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:51:46.213126 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:51:46.249829 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:51:46.250218 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:51:46.278039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:51:46.293844 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:51:46.294200 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:51:46.304292 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:51:46.304478 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:51:46.345466 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:51:46.345618 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:51:46.374242 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:51:46.375353 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:51:46.375476 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:51:46.396559 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:51:46.396816 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:51:46.415315 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:51:46.415380 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:51:46.433028 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:51:46.433109 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:51:46.443150 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:51:46.443243 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:51:46.461139 systemd[1]: Stopped target network.target - Network. Sep 4 17:51:46.490910 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:51:46.491129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:51:46.520089 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:51:46.538930 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:51:46.543786 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:51:46.559830 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:51:46.576868 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:51:46.591908 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 4 17:51:46.591992 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:51:46.609992 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:51:46.610067 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:51:46.628999 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:51:46.629123 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:51:46.646934 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:51:46.647033 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:51:46.664942 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:51:46.665040 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:51:46.683174 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:51:46.693758 systemd-networkd[753]: eth0: DHCPv6 lease lost Sep 4 17:51:46.712076 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:51:46.730321 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:51:46.730464 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:51:46.749376 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:51:46.749780 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:51:46.757430 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:51:46.757483 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:51:46.777823 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:51:46.799860 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:51:46.799995 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:51:46.812942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:51:46.813039 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:51:47.311817 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Sep 4 17:51:46.833067 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:51:46.833151 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:51:46.854047 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:51:46.854135 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:51:46.877470 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:51:46.909444 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:51:46.909627 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:51:46.941215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:51:46.941288 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:51:46.963025 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:51:46.963096 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:51:46.982961 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:51:46.983070 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:51:47.009847 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:51:47.009977 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 4 17:51:47.037859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:51:47.038023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:47.073934 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:51:47.078072 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:51:47.078170 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:51:47.133124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:51:47.133214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:47.155591 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:51:47.155762 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:51:47.164819 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:51:47.164965 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:51:47.207772 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:51:47.233983 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:51:47.258602 systemd[1]: Switching root. Sep 4 17:51:47.603859 systemd-journald[183]: Journal stopped Sep 4 17:51:39.125293 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024 Sep 4 17:51:39.125340 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:51:39.125360 kernel: BIOS-provided physical RAM map: Sep 4 17:51:39.125372 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 4 17:51:39.125383 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 4 17:51:39.125395 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 4 17:51:39.125412 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 4 17:51:39.125432 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 4 17:51:39.125447 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Sep 4 17:51:39.125462 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Sep 4 17:51:39.125478 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Sep 4 17:51:39.125493 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Sep 4 17:51:39.125508 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 4 17:51:39.125523 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 4 17:51:39.125546 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 4 17:51:39.125563 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 4 17:51:39.125580 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 4 17:51:39.125597 kernel: NX (Execute Disable) protection: active Sep 4 17:51:39.125613 kernel: APIC: Static calls initialized Sep 4 17:51:39.125630 kernel: efi: EFI v2.7 by EDK II Sep 4 
17:51:39.125646 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Sep 4 17:51:39.125687 kernel: SMBIOS 2.4 present. Sep 4 17:51:39.125705 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024 Sep 4 17:51:39.125722 kernel: Hypervisor detected: KVM Sep 4 17:51:39.125743 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:51:39.125759 kernel: kvm-clock: using sched offset of 11674557486 cycles Sep 4 17:51:39.125777 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:51:39.125794 kernel: tsc: Detected 2299.998 MHz processor Sep 4 17:51:39.125818 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:51:39.125836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:51:39.125853 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 4 17:51:39.125870 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 4 17:51:39.125888 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:51:39.125908 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 4 17:51:39.125926 kernel: Using GB pages for direct mapping Sep 4 17:51:39.125943 kernel: Secure boot disabled Sep 4 17:51:39.125960 kernel: ACPI: Early table checksum verification disabled Sep 4 17:51:39.125977 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 4 17:51:39.125995 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 4 17:51:39.126012 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 4 17:51:39.126037 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 4 17:51:39.126058 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 4 17:51:39.126077 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Sep 4 17:51:39.126095 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 4 17:51:39.126114 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 4 17:51:39.126132 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 4 17:51:39.126150 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 4 17:51:39.126172 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 4 17:51:39.126190 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 4 17:51:39.126209 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 4 17:51:39.126227 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 4 17:51:39.126245 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 4 17:51:39.126263 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 4 17:51:39.126281 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 4 17:51:39.126300 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 4 17:51:39.126318 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 4 17:51:39.126340 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 4 17:51:39.126358 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 4 17:51:39.126377 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 4 17:51:39.126394 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 4 17:51:39.126412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 4 17:51:39.126430 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 4 17:51:39.126448 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Sep 4 17:51:39.126467 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Sep 4 17:51:39.126485 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Sep 4 17:51:39.126508 kernel: Zone ranges: Sep 4 17:51:39.126525 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:51:39.126544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 17:51:39.126563 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 4 17:51:39.126581 kernel: Movable zone start for each node Sep 4 17:51:39.126599 kernel: Early memory node ranges Sep 4 17:51:39.126618 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 4 17:51:39.126636 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 4 17:51:39.126654 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Sep 4 17:51:39.126962 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 4 17:51:39.126980 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 4 17:51:39.126999 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 4 17:51:39.127018 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:51:39.127037 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 4 17:51:39.127055 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 4 17:51:39.127073 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 4 17:51:39.127092 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 4 17:51:39.127112 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:51:39.127134 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:51:39.127153 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:51:39.127171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:51:39.127189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:51:39.127208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:51:39.127226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:51:39.127245 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:51:39.127264 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 4 17:51:39.127282 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 4 17:51:39.127303 kernel: Booting paravirtualized kernel on KVM Sep 4 17:51:39.127322 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:51:39.127341 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 17:51:39.127360 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Sep 4 17:51:39.127379 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Sep 4 17:51:39.127397 kernel: pcpu-alloc: [0] 0 1 Sep 4 17:51:39.127414 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:51:39.127433 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:51:39.127454 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a 
mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:51:39.127477 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:51:39.127496 kernel: random: crng init done Sep 4 17:51:39.127513 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 17:51:39.127532 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:51:39.127551 kernel: Fallback order for Node 0: 0 Sep 4 17:51:39.127569 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Sep 4 17:51:39.127588 kernel: Policy zone: Normal Sep 4 17:51:39.127606 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:51:39.127628 kernel: software IO TLB: area num 2. Sep 4 17:51:39.127647 kernel: Memory: 7515640K/7860584K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 344684K reserved, 0K cma-reserved) Sep 4 17:51:39.127681 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:51:39.127700 kernel: Kernel/User page tables isolation: enabled Sep 4 17:51:39.127719 kernel: ftrace: allocating 37748 entries in 148 pages Sep 4 17:51:39.127737 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:51:39.127755 kernel: Dynamic Preempt: voluntary Sep 4 17:51:39.127774 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:51:39.127794 kernel: rcu: RCU event tracing is enabled. Sep 4 17:51:39.127838 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:51:39.127858 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:51:39.127878 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:51:39.127902 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:51:39.127921 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:51:39.127941 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:51:39.127961 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 4 17:51:39.127981 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:51:39.128000 kernel: Console: colour dummy device 80x25 Sep 4 17:51:39.128024 kernel: printk: console [ttyS0] enabled Sep 4 17:51:39.128044 kernel: ACPI: Core revision 20230628 Sep 4 17:51:39.128062 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:51:39.128082 kernel: x2apic enabled Sep 4 17:51:39.128102 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:51:39.128122 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 4 17:51:39.128143 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 4 17:51:39.128162 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Sep 4 17:51:39.128186 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 4 17:51:39.128206 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 4 17:51:39.128226 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:51:39.128246 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 4 17:51:39.128265 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 4 17:51:39.128285 kernel: Spectre V2 : Mitigation: IBRS Sep 4 17:51:39.128305 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:51:39.128325 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:51:39.128344 kernel: RETBleed: Mitigation: IBRS Sep 4 17:51:39.128368 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 17:51:39.128387 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 4 17:51:39.128407 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 17:51:39.128426 kernel: MDS: Mitigation: Clear CPU buffers Sep 4 17:51:39.128446 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 17:51:39.128466 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:51:39.128485 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:51:39.128500 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:51:39.128517 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:51:39.128540 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 4 17:51:39.128560 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:51:39.128580 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:51:39.128600 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:51:39.128619 kernel: landlock: Up and running. Sep 4 17:51:39.128639 kernel: SELinux: Initializing. Sep 4 17:51:39.128669 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.128702 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.128722 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 4 17:51:39.128745 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:51:39.128761 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:51:39.128778 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:51:39.128803 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 4 17:51:39.128821 kernel: signal: max sigframe size: 1776 Sep 4 17:51:39.128836 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:51:39.128857 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:51:39.128880 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 17:51:39.128903 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:51:39.128929 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:51:39.128945 kernel: .... node #0, CPUs: #1 Sep 4 17:51:39.128966 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
Sep 4 17:51:39.128986 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 4 17:51:39.129005 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:51:39.129024 kernel: smpboot: Max logical packages: 1 Sep 4 17:51:39.129043 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 4 17:51:39.129061 kernel: devtmpfs: initialized Sep 4 17:51:39.129084 kernel: x86/mm: Memory block size: 128MB Sep 4 17:51:39.129103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 4 17:51:39.129123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:51:39.129142 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:51:39.129161 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:51:39.129180 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:51:39.129199 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:51:39.129218 kernel: audit: type=2000 audit(1725472297.753:1): state=initialized audit_enabled=0 res=1 Sep 4 17:51:39.129236 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:51:39.129259 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:51:39.129278 kernel: cpuidle: using governor menu Sep 4 17:51:39.129296 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:51:39.129315 kernel: dca service started, version 1.12.1 Sep 4 17:51:39.129334 kernel: PCI: Using configuration type 1 for base access Sep 4 17:51:39.129353 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 17:51:39.129372 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:51:39.129391 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:51:39.129410 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:51:39.129433 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:51:39.129452 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:51:39.129470 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:51:39.129489 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:51:39.129509 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:51:39.129528 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 4 17:51:39.129546 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:51:39.129565 kernel: ACPI: Interpreter enabled Sep 4 17:51:39.129584 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 17:51:39.129606 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:51:39.129626 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:51:39.129644 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 17:51:39.129685 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 4 17:51:39.129705 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:51:39.130006 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:51:39.130204 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 4 17:51:39.130382 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 4 17:51:39.130410 kernel: PCI host bridge to bus 0000:00 Sep 4 17:51:39.130583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:51:39.130777 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:51:39.130950 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 17:51:39.131111 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 4 17:51:39.131271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:51:39.131470 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:51:39.131710 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Sep 4 17:51:39.132088 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 17:51:39.132308 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:51:39.132511 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Sep 4 17:51:39.132763 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Sep 4 17:51:39.132963 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Sep 4 17:51:39.133173 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 4 17:51:39.133362 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Sep 4 17:51:39.133550 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Sep 4 17:51:39.133811 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:51:39.133997 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 4 17:51:39.134187 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Sep 4 17:51:39.134219 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:51:39.134238 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:51:39.134257 kernel: ACPI: PCI: Interrupt link LNKC 
configured for IRQ 11 Sep 4 17:51:39.134276 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:51:39.134295 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:51:39.134314 kernel: iommu: Default domain type: Translated Sep 4 17:51:39.134333 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:51:39.134351 kernel: efivars: Registered efivars operations Sep 4 17:51:39.134371 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:51:39.134390 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:51:39.134413 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 4 17:51:39.134432 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 4 17:51:39.134450 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 4 17:51:39.134469 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 4 17:51:39.134487 kernel: vgaarb: loaded Sep 4 17:51:39.134506 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:51:39.134525 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:51:39.134543 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:51:39.134566 kernel: pnp: PnP ACPI init Sep 4 17:51:39.134584 kernel: pnp: PnP ACPI: found 7 devices Sep 4 17:51:39.134601 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:51:39.134620 kernel: NET: Registered PF_INET protocol family Sep 4 17:51:39.134639 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 17:51:39.134671 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 17:51:39.134692 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:51:39.134710 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:51:39.134735 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 17:51:39.134759 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 17:51:39.134786 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.134804 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 17:51:39.134823 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:51:39.134842 kernel: NET: Registered PF_XDP protocol family Sep 4 17:51:39.135027 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 17:51:39.135197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:51:39.135366 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:51:39.135540 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 4 17:51:39.138188 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:51:39.138343 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:51:39.138364 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 17:51:39.138384 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 4 17:51:39.138403 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 17:51:39.138423 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 4 17:51:39.138549 kernel: clocksource: Switched to clocksource tsc Sep 4 17:51:39.138575 kernel: Initialise system trusted keyrings Sep 4 17:51:39.138594 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 
17:51:39.138613 kernel: Key type asymmetric registered Sep 4 17:51:39.138631 kernel: Asymmetric key parser 'x509' registered Sep 4 17:51:39.139233 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:51:39.139254 kernel: io scheduler mq-deadline registered Sep 4 17:51:39.139273 kernel: io scheduler kyber registered Sep 4 17:51:39.139292 kernel: io scheduler bfq registered Sep 4 17:51:39.139311 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:51:39.139438 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 17:51:39.140052 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 4 17:51:39.140089 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 17:51:39.140314 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 4 17:51:39.140339 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 17:51:39.140534 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 4 17:51:39.140559 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:51:39.140578 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:51:39.140599 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 17:51:39.140625 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 4 17:51:39.140645 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 4 17:51:39.140921 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 4 17:51:39.140949 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:51:39.140968 kernel: i8042: Warning: Keylock active Sep 4 17:51:39.140984 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:51:39.141000 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:51:39.141189 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 4 17:51:39.141382 kernel: rtc_cmos 00:00: registered as rtc0 Sep 4 17:51:39.141562 kernel: rtc_cmos 00:00: setting system clock to 2024-09-04T17:51:38 UTC (1725472298) Sep 4 17:51:39.143811 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 4 17:51:39.143851 kernel: intel_pstate: CPU model not supported Sep 4 17:51:39.143868 kernel: pstore: Using crash dump compression: deflate Sep 4 17:51:39.143887 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:51:39.143906 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:51:39.143922 kernel: Segment Routing with IPv6 Sep 4 17:51:39.144049 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:51:39.144069 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:51:39.144089 kernel: Key type dns_resolver registered Sep 4 17:51:39.144109 kernel: IPI shorthand broadcast: enabled Sep 4 17:51:39.144128 kernel: sched_clock: Marking stable (858003948, 146449383)->(1041876786, -37423455) Sep 4 17:51:39.144148 kernel: registered taskstats version 1 Sep 4 17:51:39.144231 kernel: Loading compiled-in X.509 certificates Sep 4 17:51:39.144250 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18' Sep 4 17:51:39.144269 kernel: Key type .fscrypt registered Sep 4 17:51:39.144293 kernel: Key type fscrypt-provisioning registered Sep 4 17:51:39.144313 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:51:39.144333 kernel: ima: No architecture policies found Sep 4 17:51:39.144352 kernel: clk: Disabling unused clocks Sep 4 17:51:39.144371 kernel: Freeing unused kernel image (initmem) 
memory: 42704K Sep 4 17:51:39.144391 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:51:39.144411 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Sep 4 17:51:39.144431 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K Sep 4 17:51:39.144455 kernel: Run /init as init process Sep 4 17:51:39.144475 kernel: with arguments: Sep 4 17:51:39.144492 kernel: /init Sep 4 17:51:39.144508 kernel: with environment: Sep 4 17:51:39.144525 kernel: HOME=/ Sep 4 17:51:39.144543 kernel: TERM=linux Sep 4 17:51:39.144562 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:51:39.144586 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:51:39.144613 systemd[1]: Detected virtualization google. Sep 4 17:51:39.144633 systemd[1]: Detected architecture x86-64. Sep 4 17:51:39.144652 systemd[1]: Running in initrd. Sep 4 17:51:39.144687 systemd[1]: No hostname configured, using default hostname. Sep 4 17:51:39.144707 systemd[1]: Hostname set to . Sep 4 17:51:39.144736 systemd[1]: Initializing machine ID from random generator. Sep 4 17:51:39.144753 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:51:39.144774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:51:39.144797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:51:39.144819 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:51:39.144838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:51:39.144857 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:51:39.144877 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:51:39.144899 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:51:39.144919 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:51:39.144941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:51:39.144962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:51:39.145000 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:51:39.145025 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:51:39.145044 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:51:39.145065 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:51:39.145088 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:51:39.145109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:51:39.145129 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:51:39.145149 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:51:39.145170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 4 17:51:39.145190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:51:39.145211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:51:39.145231 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:51:39.145251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:51:39.145276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:51:39.145296 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:51:39.145316 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:51:39.145336 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:51:39.145357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:51:39.145376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:51:39.145397 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:51:39.145454 systemd-journald[183]: Collecting audit messages is disabled. Sep 4 17:51:39.145502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:51:39.145522 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:51:39.145548 systemd-journald[183]: Journal started Sep 4 17:51:39.145587 systemd-journald[183]: Runtime Journal (/run/log/journal/7503804a1c0d45a192737c4c4fa7832a) is 8.0M, max 148.7M, 140.7M free. Sep 4 17:51:39.143853 systemd-modules-load[184]: Inserted module 'overlay' Sep 4 17:51:39.151817 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:51:39.174940 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:51:39.187699 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:51:39.189017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:51:39.197960 kernel: Bridge firewalling registered Sep 4 17:51:39.190968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:39.192131 systemd-modules-load[184]: Inserted module 'br_netfilter' Sep 4 17:51:39.203318 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:51:39.210088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:51:39.221293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:51:39.235937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:51:39.243866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:51:39.256820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:51:39.275955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:51:39.286874 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:51:39.299047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:39.299845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:51:39.321970 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 4 17:51:39.337253 systemd-resolved[212]: Positive Trust Anchors: Sep 4 17:51:39.337272 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:51:39.337337 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:51:39.343250 systemd-resolved[212]: Defaulting to hostname 'linux'. Sep 4 17:51:39.371861 dracut-cmdline[217]: dracut-dracut-053 Sep 4 17:51:39.371861 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d Sep 4 17:51:39.345713 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:51:39.362963 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:51:39.456708 kernel: SCSI subsystem initialized Sep 4 17:51:39.466689 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:51:39.479693 kernel: iscsi: registered transport (tcp) Sep 4 17:51:39.503714 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:51:39.503812 kernel: QLogic iSCSI HBA Driver Sep 4 17:51:39.556382 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:51:39.562856 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:51:39.596996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:51:39.597087 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:51:39.597115 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:51:39.642707 kernel: raid6: avx2x4 gen() 17393 MB/s Sep 4 17:51:39.659701 kernel: raid6: avx2x2 gen() 17695 MB/s Sep 4 17:51:39.677061 kernel: raid6: avx2x1 gen() 13773 MB/s Sep 4 17:51:39.677114 kernel: raid6: using algorithm avx2x2 gen() 17695 MB/s Sep 4 17:51:39.695160 kernel: raid6: .... xor() 17658 MB/s, rmw enabled Sep 4 17:51:39.695219 kernel: raid6: using avx2x2 recovery algorithm Sep 4 17:51:39.718709 kernel: xor: automatically using best checksumming function avx Sep 4 17:51:39.899733 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:51:39.913413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:51:39.924906 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:51:39.959158 systemd-udevd[399]: Using default interface naming scheme 'v255'. Sep 4 17:51:39.966457 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:51:39.977908 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 4 17:51:40.008905 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Sep 4 17:51:40.045869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:51:40.052013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:51:40.146387 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:51:40.159897 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:51:40.196653 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:51:40.208845 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:51:40.216783 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:51:40.221997 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:51:40.238925 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:51:40.250978 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 17:51:40.271876 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 17:51:40.274133 kernel: AES CTR mode by8 optimization enabled Sep 4 17:51:40.284094 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:51:40.317134 kernel: scsi host0: Virtio SCSI HBA Sep 4 17:51:40.328829 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 4 17:51:40.366412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:51:40.367383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:40.380736 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:51:40.382326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:51:40.382597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:40.382896 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:51:40.391299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:51:40.416768 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 4 17:51:40.417100 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 4 17:51:40.417313 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 4 17:51:40.417718 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 4 17:51:40.418345 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 17:51:40.426715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:51:40.426799 kernel: GPT:17805311 != 25165823 Sep 4 17:51:40.426823 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:51:40.426845 kernel: GPT:17805311 != 25165823 Sep 4 17:51:40.426867 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:51:40.426889 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:40.431696 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 4 17:51:40.432486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:40.443913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 17:51:40.487694 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (460) Sep 4 17:51:40.490731 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (449) Sep 4 17:51:40.513008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:40.528015 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 4 17:51:40.535854 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 4 17:51:40.547903 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 4 17:51:40.548166 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 4 17:51:40.565799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 4 17:51:40.577908 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:51:40.591815 disk-uuid[549]: Primary Header is updated. Sep 4 17:51:40.591815 disk-uuid[549]: Secondary Entries is updated. Sep 4 17:51:40.591815 disk-uuid[549]: Secondary Header is updated. Sep 4 17:51:40.605728 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:40.622688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:40.641708 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:41.637705 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:51:41.639203 disk-uuid[550]: The operation has completed successfully. Sep 4 17:51:41.724612 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:51:41.724782 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:51:41.743882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:51:41.782797 sh[567]: Success Sep 4 17:51:41.807785 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 4 17:51:41.897239 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:51:41.904647 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:51:41.944550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:51:41.976735 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772 Sep 4 17:51:41.976826 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:42.002310 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:51:42.002405 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:51:42.002430 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:51:42.039804 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 17:51:42.046748 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:51:42.047768 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:51:42.052927 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
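Editorial note: verity-setup above activates /dev/mapper/usr and the kernel reports "verity: sha256 using implementation sha256-avx2". The sketch below shows only the per-block idea in simplified form: hash a 4096-byte block and compare it to an expected digest. Real dm-verity verifies blocks against a salted Merkle tree whose root is the verity.usrhash= value on the command line; none of that tree or salt handling is reproduced here.

```python
# Simplified, single-level illustration of dm-verity's per-block check.
# Real dm-verity walks a hash tree rooted in a precomputed root hash and
# applies a salt; this collapses the check to one sha256 comparison.
import hashlib

BLOCK_SIZE = 4096

def block_ok(data_block: bytes, expected_hex: str) -> bool:
    assert len(data_block) == BLOCK_SIZE
    return hashlib.sha256(data_block).hexdigest() == expected_hex

if __name__ == "__main__":
    block = bytes(BLOCK_SIZE)                     # an all-zero example block
    expected = hashlib.sha256(block).hexdigest()  # stand-in for a hash-tree entry
    print(block_ok(block, expected))              # True
```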
Sep 4 17:51:42.127946 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:42.127993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:42.128019 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:51:42.128042 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 17:51:42.128072 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:51:42.108992 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:51:42.150928 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:42.165233 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:51:42.186026 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:51:42.357592 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:51:42.378018 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:51:42.397783 ignition[645]: Ignition 2.19.0 Sep 4 17:51:42.397801 ignition[645]: Stage: fetch-offline Sep 4 17:51:42.399543 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:51:42.397861 ignition[645]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.436474 systemd-networkd[753]: lo: Link UP Sep 4 17:51:42.397876 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.436480 systemd-networkd[753]: lo: Gained carrier Sep 4 17:51:42.398018 ignition[645]: parsed url from cmdline: "" Sep 4 17:51:42.438107 systemd-networkd[753]: Enumeration completed Sep 4 17:51:42.398026 ignition[645]: no config URL provided Sep 4 17:51:42.438805 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:51:42.398035 ignition[645]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:51:42.438887 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:51:42.398050 ignition[645]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:51:42.438893 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:51:42.398061 ignition[645]: failed to fetch config: resource requires networking Sep 4 17:51:42.440923 systemd-networkd[753]: eth0: Link UP Sep 4 17:51:42.398397 ignition[645]: Ignition finished successfully Sep 4 17:51:42.440930 systemd-networkd[753]: eth0: Gained carrier Sep 4 17:51:42.520678 ignition[758]: Ignition 2.19.0 Sep 4 17:51:42.440941 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:51:42.520692 ignition[758]: Stage: fetch Sep 4 17:51:42.456744 systemd-networkd[753]: eth0: DHCPv4 address 10.128.0.52/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 4 17:51:42.520946 ignition[758]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.458198 systemd[1]: Reached target network.target - Network. Sep 4 17:51:42.520958 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.473021 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 17:51:42.521079 ignition[758]: parsed url from cmdline: "" Sep 4 17:51:42.530555 unknown[758]: fetched base config from "system" Sep 4 17:51:42.521086 ignition[758]: no config URL provided Sep 4 17:51:42.530569 unknown[758]: fetched base config from "system" Sep 4 17:51:42.521095 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:51:42.530580 unknown[758]: fetched user config from "gcp" Sep 4 17:51:42.521108 ignition[758]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:51:42.533016 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:51:42.521134 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 4 17:51:42.551921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:51:42.525589 ignition[758]: GET result: OK Sep 4 17:51:42.606835 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:51:42.525728 ignition[758]: parsing config with SHA512: 31a4fc7d3870f947626d12e6917d5893c83954c269c2cc20de9816afdbfcacf3666baf3ff1cf86aa059a389a537d01105b86236f7876cf55059d21e6f96426fb Sep 4 17:51:42.613943 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:51:42.531047 ignition[758]: fetch: fetch complete Sep 4 17:51:42.652921 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:51:42.531054 ignition[758]: fetch: fetch passed Sep 4 17:51:42.676118 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:51:42.531116 ignition[758]: Ignition finished successfully Sep 4 17:51:42.693863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:51:42.604278 ignition[764]: Ignition 2.19.0 Sep 4 17:51:42.715888 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:51:42.604288 ignition[764]: Stage: kargs Sep 4 17:51:42.731926 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:51:42.604506 ignition[764]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.747848 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:51:42.604518 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.771918 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:51:42.605558 ignition[764]: kargs: kargs passed Sep 4 17:51:42.605615 ignition[764]: Ignition finished successfully Sep 4 17:51:42.650437 ignition[769]: Ignition 2.19.0 Sep 4 17:51:42.650452 ignition[769]: Stage: disks Sep 4 17:51:42.650709 ignition[769]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:42.650723 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:42.651700 ignition[769]: disks: disks passed Sep 4 17:51:42.651776 ignition[769]: Ignition finished successfully Sep 4 17:51:42.811416 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 4 17:51:43.008861 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:51:43.013857 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:51:43.164819 kernel: EXT4-fs (sda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none. Sep 4 17:51:43.165728 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:51:43.166554 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:51:43.188808 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
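Editorial note: the fetch stage above issues "GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data" and then logs "parsing config with SHA512: ...". The following sketch mirrors that request from Python; it only works from inside a GCE VM (the metadata server requires the Metadata-Flavor header) and is not Ignition's actual implementation.

```python
# Sketch of the fetch the log shows Ignition performing: read this GCE
# instance's user-data from the metadata server and print its SHA512,
# mirroring the "parsing config with SHA512: ..." log line.
import hashlib
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

def fetch_user_data(timeout: float = 5.0) -> bytes:
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    data = fetch_user_data()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```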
Sep 4 17:51:43.213123 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:51:43.233695 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Sep 4 17:51:43.253024 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:43.253127 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:43.253154 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:51:43.259995 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:51:43.299868 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 17:51:43.299914 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:51:43.260082 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:51:43.260125 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:51:43.274002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:51:43.309652 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:51:43.339877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:51:43.474262 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:51:43.485857 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:51:43.495839 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:51:43.506840 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:51:43.643967 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:51:43.648846 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:51:43.666933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:51:43.699710 kernel: BTRFS info (device sda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:43.706108 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:51:43.745571 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:51:43.754992 ignition[899]: INFO : Ignition 2.19.0 Sep 4 17:51:43.754992 ignition[899]: INFO : Stage: mount Sep 4 17:51:43.754992 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:43.754992 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:43.754992 ignition[899]: INFO : mount: mount passed Sep 4 17:51:43.754992 ignition[899]: INFO : Ignition finished successfully Sep 4 17:51:43.765205 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:51:43.788827 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:51:44.124894 systemd-networkd[753]: eth0: Gained IPv6LL Sep 4 17:51:44.172967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 17:51:44.213651 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (911) Sep 4 17:51:44.213725 kernel: BTRFS info (device sda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d Sep 4 17:51:44.213750 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 17:51:44.213773 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:51:44.236801 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 17:51:44.236898 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:51:44.240808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:51:44.281716 ignition[928]: INFO : Ignition 2.19.0 Sep 4 17:51:44.281716 ignition[928]: INFO : Stage: files Sep 4 17:51:44.295816 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:44.295816 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:44.295816 ignition[928]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:51:44.295816 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:51:44.295816 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:51:44.293147 unknown[928]: wrote ssh authorized keys file for user: core Sep 4 17:51:44.397836 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:51:44.397836 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:51:44.397836 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:51:44.496772 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:51:44.513856 
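Editorial note: the files stage above downloads https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz and writes it to /sysroot/opt/. The sketch below gives a simplified picture of that operation, streaming a source URL into a target path under a sysroot; real Ignition additionally handles modes, owners, verification hashes and retries, none of which are modelled here.

```python
# Simplified picture of an Ignition "files" entry: stream the source URL
# into the target path under the sysroot, creating parent directories.
import pathlib
import shutil
import urllib.request

def write_file(sysroot: str, target: str, url: str) -> pathlib.Path:
    dest = pathlib.Path(sysroot) / target.lstrip("/")
    dest.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)
    return dest

if __name__ == "__main__":
    # Example mirroring the log; point sysroot somewhere writable when testing.
    path = write_file("/tmp/sysroot", "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                      "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz")
    print("wrote", path)
```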
ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:44.513856 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Sep 4 17:51:44.792706 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:51:45.341682 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:51:45.359875 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:51:45.359875 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:51:45.359875 ignition[928]: INFO : files: files passed Sep 4 17:51:45.359875 ignition[928]: INFO : Ignition finished successfully Sep 4 17:51:45.346256 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:51:45.365909 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:51:45.384872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:51:45.420352 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:51:45.585903 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:51:45.585903 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:51:45.420479 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
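Editorial note: the tail of the files stage above writes prepare-helm.service into /sysroot/etc/systemd/system/ and then logs "setting preset to enabled". The sketch below models enablement as a multi-user.target.wants/ symlink, which is what "systemctl enable" typically materializes; the real preset logic reads the unit's [Install] section, so the target directory chosen here is an assumption for illustration only.

```python
# Rough sketch of the two steps logged for prepare-helm.service: write the
# unit file under the sysroot, then "enable" it via a .wants/ symlink.
# The multi-user.target.wants location is an assumption, not read from
# the unit's [Install] section as systemd's preset logic would do.
import pathlib

def install_unit(sysroot: str, name: str, contents: str) -> None:
    etc = pathlib.Path(sysroot) / "etc/systemd/system"
    unit = etc / name
    unit.parent.mkdir(parents=True, exist_ok=True)
    unit.write_text(contents)

    wants = etc / "multi-user.target.wants"
    wants.mkdir(parents=True, exist_ok=True)
    link = wants / name
    if not link.is_symlink():
        link.symlink_to(f"/etc/systemd/system/{name}")

if __name__ == "__main__":
    install_unit("/tmp/sysroot", "prepare-helm.service",
                 "[Unit]\nDescription=Unpack helm to /opt/bin\n")
```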
Sep 4 17:51:45.652865 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:51:45.485703 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:51:45.490144 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:51:45.520982 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:51:45.585244 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:51:45.585374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:51:45.597080 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:51:45.610974 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:51:45.643099 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:51:45.649997 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:51:45.703956 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:51:45.729904 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:51:45.761768 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:51:45.777191 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:51:45.787193 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:51:45.817135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:51:45.817339 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:51:45.848218 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:51:45.859597 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:51:45.878272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:51:45.905103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:51:45.924109 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:51:45.943107 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:51:45.965167 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:51:45.976332 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:51:45.998378 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:51:46.017225 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:51:46.035123 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:51:46.035322 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:51:46.069158 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:51:46.080182 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:51:46.098175 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:51:46.098348 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:51:46.135072 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:51:46.135271 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 4 17:51:46.259887 ignition[981]: INFO : Ignition 2.19.0 Sep 4 17:51:46.259887 ignition[981]: INFO : Stage: umount Sep 4 17:51:46.259887 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:51:46.259887 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 17:51:46.259887 ignition[981]: INFO : umount: umount passed Sep 4 17:51:46.259887 ignition[981]: INFO : Ignition finished successfully Sep 4 17:51:46.163316 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:51:46.163561 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:51:46.174593 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:51:46.174930 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:51:46.213126 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:51:46.249829 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:51:46.250218 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:51:46.278039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:51:46.293844 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:51:46.294200 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:51:46.304292 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:51:46.304478 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:51:46.345466 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:51:46.345618 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:51:46.374242 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:51:46.375353 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:51:46.375476 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:51:46.396559 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:51:46.396816 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:51:46.415315 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:51:46.415380 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:51:46.433028 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:51:46.433109 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:51:46.443150 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:51:46.443243 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:51:46.461139 systemd[1]: Stopped target network.target - Network. Sep 4 17:51:46.490910 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:51:46.491129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:51:46.520089 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:51:46.538930 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:51:46.543786 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:51:46.559830 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:51:46.576868 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:51:46.591908 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 4 17:51:46.591992 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:51:46.609992 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:51:46.610067 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:51:46.628999 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:51:46.629123 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:51:46.646934 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:51:46.647033 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:51:46.664942 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:51:46.665040 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:51:46.683174 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:51:46.693758 systemd-networkd[753]: eth0: DHCPv6 lease lost Sep 4 17:51:46.712076 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:51:46.730321 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:51:46.730464 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:51:46.749376 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:51:46.749780 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:51:46.757430 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:51:46.757483 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:51:46.777823 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:51:46.799860 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:51:46.799995 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:51:46.812942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:51:46.813039 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:51:47.311817 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Sep 4 17:51:46.833067 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:51:46.833151 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:51:46.854047 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:51:46.854135 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:51:46.877470 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:51:46.909444 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:51:46.909627 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:51:46.941215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:51:46.941288 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:51:46.963025 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:51:46.963096 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:51:46.982961 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:51:46.983070 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:51:47.009847 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:51:47.009977 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 4 17:51:47.037859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:51:47.038023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:51:47.073934 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:51:47.078072 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:51:47.078170 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:51:47.133124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:51:47.133214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:47.155591 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:51:47.155762 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:51:47.164819 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:51:47.164965 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:51:47.207772 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:51:47.233983 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:51:47.258602 systemd[1]: Switching root. Sep 4 17:51:47.603859 systemd-journald[183]: Journal stopped Sep 4 17:51:50.098076 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:51:50.098138 kernel: SELinux: policy capability open_perms=1 Sep 4 17:51:50.098162 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:51:50.098181 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:51:50.098200 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:51:50.098219 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:51:50.098240 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:51:50.098265 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:51:50.098285 kernel: audit: type=1403 audit(1725472307.904:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:51:50.098309 systemd[1]: Successfully loaded SELinux policy in 83.069ms. Sep 4 17:51:50.098333 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.368ms. Sep 4 17:51:50.098356 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:51:50.098378 systemd[1]: Detected virtualization google. Sep 4 17:51:50.098399 systemd[1]: Detected architecture x86-64. Sep 4 17:51:50.098427 systemd[1]: Detected first boot. Sep 4 17:51:50.098451 systemd[1]: Initializing machine ID from random generator. Sep 4 17:51:50.098474 zram_generator::config[1022]: No configuration found. Sep 4 17:51:50.098499 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:51:50.098522 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:51:50.098549 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:51:50.098570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:51:50.098592 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
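Editorial note: after switching root, systemd logs "Initializing machine ID from random generator." The machine ID is stored in /etc/machine-id as a newline-terminated, 32-character lowercase hex string encoding 128 random bits (the 32-hex directory name under /run/log/journal later in this log is exactly such an ID). The sketch below only formats random bytes; systemd's sd_id128 code additionally shapes the value like a version-4 UUID.

```python
# Illustration of the "machine ID from random generator" step: produce a
# 32-character lowercase hex ID of the kind stored in /etc/machine-id.
import secrets

def new_machine_id() -> str:
    return secrets.token_bytes(16).hex()

if __name__ == "__main__":
    machine_id = new_machine_id()
    print(machine_id)
    # Persisting it the way systemd does (only on a throwaway test root):
    # open("/etc/machine-id", "w").write(machine_id + "\n")
```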
Sep 4 17:51:50.098615 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:51:50.098635 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:51:50.098690 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:51:50.098713 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:51:50.098751 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:51:50.098773 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:51:50.098793 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:51:50.098814 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:51:50.098837 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:51:50.098858 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:51:50.098879 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:51:50.098900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:51:50.098926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:51:50.098959 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:51:50.098980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:51:50.099001 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:51:50.099025 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:51:50.099048 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:51:50.099078 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:51:50.099101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:51:50.099123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:51:50.099150 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:51:50.099172 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:51:50.099193 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:51:50.099214 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:51:50.099235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:51:50.099256 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:51:50.099277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:51:50.099304 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:51:50.099325 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:51:50.099347 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:51:50.099369 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:51:50.099391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:51:50.099417 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 4 17:51:50.099438 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:51:50.099461 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:51:50.099484 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:51:50.099507 systemd[1]: Reached target machines.target - Containers. Sep 4 17:51:50.099530 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:51:50.099552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:51:50.099575 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:51:50.099601 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:51:50.099623 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:51:50.099691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:51:50.099722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:51:50.099744 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:51:50.099764 kernel: ACPI: bus type drm_connector registered Sep 4 17:51:50.099783 kernel: fuse: init (API version 7.39) Sep 4 17:51:50.099803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:51:50.099830 kernel: loop: module loaded Sep 4 17:51:50.099851 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:51:50.099873 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:51:50.099896 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:51:50.099917 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:51:50.099939 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:51:50.099962 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:51:50.099984 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:51:50.100006 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:51:50.100074 systemd-journald[1108]: Collecting audit messages is disabled. Sep 4 17:51:50.100122 systemd-journald[1108]: Journal started Sep 4 17:51:50.100169 systemd-journald[1108]: Runtime Journal (/run/log/journal/dbff05f0aa014b26843cc91c1894fb5d) is 8.0M, max 148.7M, 140.7M free. Sep 4 17:51:50.102394 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:51:48.825224 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:51:48.848691 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 17:51:48.849297 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:51:50.121720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:51:50.146813 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:51:50.146900 systemd[1]: Stopped verity-setup.service. Sep 4 17:51:50.176703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 17:51:50.186698 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:51:50.197243 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:51:50.207063 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:51:50.217082 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:51:50.227082 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:51:50.238133 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:51:50.248096 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:51:50.258283 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:51:50.270297 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:51:50.282313 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:51:50.282557 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:51:50.294248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:51:50.294482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:51:50.306220 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:51:50.306458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:51:50.317206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:51:50.317439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:51:50.329255 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:51:50.329494 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:51:50.340297 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:51:50.340544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:51:50.351260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:51:50.361165 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:51:50.373247 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:51:50.385271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:51:50.410238 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:51:50.428858 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:51:50.443823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:51:50.453810 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:51:50.453879 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:51:50.465171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:51:50.481955 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:51:50.500357 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:51:50.510088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:51:50.515475 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 4 17:51:50.529946 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:51:50.540884 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:51:50.549154 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:51:50.559838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:51:50.565993 systemd-journald[1108]: Time spent on flushing to /var/log/journal/dbff05f0aa014b26843cc91c1894fb5d is 106.862ms for 926 entries. Sep 4 17:51:50.565993 systemd-journald[1108]: System Journal (/var/log/journal/dbff05f0aa014b26843cc91c1894fb5d) is 8.0M, max 584.8M, 576.8M free. Sep 4 17:51:50.698931 systemd-journald[1108]: Received client request to flush runtime journal. Sep 4 17:51:50.699004 kernel: loop0: detected capacity change from 0 to 89168 Sep 4 17:51:50.576992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:51:50.595096 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:51:50.619957 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:51:50.637966 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:51:50.655748 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:51:50.668088 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:51:50.679198 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:51:50.696290 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:51:50.708813 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:51:50.721421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:51:50.749970 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:51:50.759690 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:51:50.778886 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:51:50.796329 udevadm[1142]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:51:50.819144 kernel: loop1: detected capacity change from 0 to 140728 Sep 4 17:51:50.849084 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:51:50.853227 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:51:50.867390 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:51:50.891001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:51:50.926714 kernel: loop2: detected capacity change from 0 to 89336 Sep 4 17:51:50.990429 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 4 17:51:50.991770 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 4 17:51:51.008827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 17:51:51.034058 kernel: loop3: detected capacity change from 0 to 210664 Sep 4 17:51:51.100915 kernel: loop4: detected capacity change from 0 to 89168 Sep 4 17:51:51.142711 kernel: loop5: detected capacity change from 0 to 140728 Sep 4 17:51:51.207779 kernel: loop6: detected capacity change from 0 to 89336 Sep 4 17:51:51.259760 kernel: loop7: detected capacity change from 0 to 210664 Sep 4 17:51:51.301649 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 4 17:51:51.302544 (sd-merge)[1165]: Merged extensions into '/usr'. Sep 4 17:51:51.310399 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:51:51.310428 systemd[1]: Reloading... Sep 4 17:51:51.462701 zram_generator::config[1186]: No configuration found. Sep 4 17:51:51.770308 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:51:51.788143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:51:51.894459 systemd[1]: Reloading finished in 582 ms. Sep 4 17:51:51.930622 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:51:51.941548 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:51:51.964987 systemd[1]: Starting ensure-sysext.service... Sep 4 17:51:51.980979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:51:52.000258 systemd[1]: Reloading requested from client PID 1229 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:51:52.000469 systemd[1]: Reloading... Sep 4 17:51:52.047274 systemd-tmpfiles[1230]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:51:52.048496 systemd-tmpfiles[1230]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:51:52.051512 systemd-tmpfiles[1230]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:51:52.053498 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 4 17:51:52.053757 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 4 17:51:52.062758 systemd-tmpfiles[1230]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:51:52.062943 systemd-tmpfiles[1230]: Skipping /boot Sep 4 17:51:52.086522 systemd-tmpfiles[1230]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:51:52.086724 systemd-tmpfiles[1230]: Skipping /boot Sep 4 17:51:52.148686 zram_generator::config[1258]: No configuration found. Sep 4 17:51:52.281256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:51:52.346697 systemd[1]: Reloading finished in 345 ms. Sep 4 17:51:52.363630 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:51:52.381449 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:51:52.409125 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:51:52.425118 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
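Editorial note: (sd-merge) above reports merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-gce' extensions into /usr. systemd-sysext performs that merge with an overlayfs mount stacked over the base /usr; the sketch below only assembles an overlayfs option string to convey the layering idea. The /run/extensions layout and the exact options are placeholders, not taken from systemd's implementation.

```python
# Conceptual sketch of the sysext merge logged above: extension trees are
# stacked over the base /usr via overlayfs. This builds only the lowerdir
# option string; paths are illustrative placeholders.
EXTENSIONS = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-gce"]

def overlay_options(base: str = "/usr",
                    ext_root: str = "/run/extensions") -> str:
    # overlayfs treats the leftmost lowerdir entry as the topmost layer
    layers = [f"{ext_root}/{name}/usr" for name in EXTENSIONS] + [base]
    return "lowerdir=" + ":".join(layers)

if __name__ == "__main__":
    print("mount -t overlay overlay -o", overlay_options(), "/usr")
```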
Sep 4 17:51:52.446330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:51:52.470569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:51:52.491697 augenrules[1316]: No rules Sep 4 17:51:52.493084 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:51:52.517368 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:51:52.531739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:51:52.542696 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:51:52.559670 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Sep 4 17:51:52.565569 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:51:52.566319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:51:52.573452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:51:52.591814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:51:52.611218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:51:52.621020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:51:52.629290 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:51:52.645187 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:51:52.654828 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:51:52.660469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:51:52.675730 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:51:52.687863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:51:52.688504 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:51:52.700866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:51:52.701136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:51:52.715092 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:51:52.715723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:51:52.726971 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:51:52.745624 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:51:52.762405 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:51:52.823286 systemd[1]: Finished ensure-sysext.service. Sep 4 17:51:52.840416 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:51:52.841437 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:51:52.842873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:51:52.851937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 4 17:51:52.869718 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1342) Sep 4 17:51:52.885720 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1342) Sep 4 17:51:52.889898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:51:52.908950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:51:52.927921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:51:52.945925 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:51:52.954975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:51:52.967965 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:51:52.978709 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:51:52.988865 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:51:52.988918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:51:52.992166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:51:52.992451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:51:53.004377 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:51:53.005745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:51:53.016314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:51:53.016780 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:51:53.030697 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 17:51:53.036366 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:51:53.037359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:51:53.049685 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:51:53.091473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:51:53.091588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:51:53.104031 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:51:53.121500 systemd-resolved[1314]: Positive Trust Anchors: Sep 4 17:51:53.121525 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:51:53.121609 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:51:53.127905 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... 
Sep 4 17:51:53.141684 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1338) Sep 4 17:51:53.157688 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 4 17:51:53.160876 systemd-resolved[1314]: Defaulting to hostname 'linux'. Sep 4 17:51:53.165068 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:51:53.178057 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:51:53.190696 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 4 17:51:53.225758 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Sep 4 17:51:53.262683 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:51:53.269963 kernel: ACPI: button: Sleep Button [SLPF] Sep 4 17:51:53.284621 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Sep 4 17:51:53.302701 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:51:53.313119 systemd-networkd[1375]: lo: Link UP Sep 4 17:51:53.313137 systemd-networkd[1375]: lo: Gained carrier Sep 4 17:51:53.317951 systemd-networkd[1375]: Enumeration completed Sep 4 17:51:53.318651 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:51:53.318687 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:51:53.321646 systemd-networkd[1375]: eth0: Link UP Sep 4 17:51:53.321680 systemd-networkd[1375]: eth0: Gained carrier Sep 4 17:51:53.321713 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:51:53.329087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:51:53.336762 systemd-networkd[1375]: eth0: DHCPv4 address 10.128.0.52/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 4 17:51:53.340139 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:51:53.355893 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 4 17:51:53.369386 systemd[1]: Reached target network.target - Network. Sep 4 17:51:53.376773 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:51:53.380175 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:51:53.395137 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:51:53.397897 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:51:53.409417 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:51:53.427026 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:51:53.458362 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:51:53.459648 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:51:53.466963 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:51:53.483218 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
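Here systemd-networkd matches eth0 against zz-default.network and acquires 10.128.0.52/32 with gateway 10.128.0.1 over DHCP from the metadata server. A sketch, assuming iproute2 with JSON output support and the interface name eth0 from this log, to read back what networkd configured:

    # Sketch: inspect the addresses systemd-networkd configured on eth0 using
    # iproute2's JSON output. Assumes `ip` supports -json and the interface is
    # named eth0 as in this log.
    import json
    import subprocess

    out = subprocess.run(["ip", "-json", "addr", "show", "eth0"],
                         capture_output=True, text=True, check=True).stdout
    for iface in json.loads(out):
        for addr in iface.get("addr_info", []):
            print(addr["family"], f'{addr["local"]}/{addr["prefixlen"]}')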
Sep 4 17:51:53.490079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:51:53.502349 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:51:53.513237 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:51:53.524967 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:51:53.537132 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:51:53.547048 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:51:53.557900 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:51:53.568863 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:51:53.568928 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:51:53.577908 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:51:53.588745 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:51:53.600651 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:51:53.620680 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:51:53.631948 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:51:53.644183 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:51:53.654843 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:51:53.664893 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:51:53.673962 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:51:53.674012 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:51:53.678871 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:51:53.701536 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:51:53.718817 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:51:53.737281 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:51:53.764970 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:51:53.774817 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:51:53.782197 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:51:53.785772 jq[1419]: false Sep 4 17:51:53.801902 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:51:53.816810 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
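The docker.socket and sshd.socket units above are socket activation: systemd binds the listening socket itself and only hands it to the service when traffic arrives, via the sd_listen_fds convention (LISTEN_PID/LISTEN_FDS in the environment, file descriptors starting at 3). A minimal, purely illustrative sketch of the receiving side:

    # Minimal sketch of the receiving side of systemd socket activation, the
    # mechanism behind docker.socket and sshd.socket above: systemd binds the
    # socket and passes it as fd 3 (and up), advertising the count in LISTEN_FDS.
    import os
    import socket

    SD_LISTEN_FDS_START = 3

    def listen_fds():
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        n = int(os.environ.get("LISTEN_FDS", "0"))
        return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))

    fds = listen_fds()
    if fds:
        srv = socket.socket(fileno=fds[0])   # adopt the already-bound socket
        conn, _peer = srv.accept()
        conn.sendall(b"hello from a socket-activated service\n")
        conn.close()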
Sep 4 17:51:53.823008 extend-filesystems[1420]: Found loop4 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found loop5 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found loop6 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found loop7 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda1 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda2 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda3 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found usr Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda4 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda6 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda7 Sep 4 17:51:53.834978 extend-filesystems[1420]: Found sda9 Sep 4 17:51:53.834978 extend-filesystems[1420]: Checking size of /dev/sda9 Sep 4 17:51:53.955007 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 4 17:51:53.955065 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 4 17:51:53.955101 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1338) Sep 4 17:51:53.830924 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:51:53.955320 extend-filesystems[1420]: Resized partition /dev/sda9 Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.852 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.856 INFO Fetch successful Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.856 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.858 INFO Fetch successful Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.862 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.865 INFO Fetch successful Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.865 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 4 17:51:53.955402 coreos-metadata[1417]: Sep 04 17:51:53.867 INFO Fetch successful Sep 4 17:51:53.910160 dbus-daemon[1418]: [system] SELinux support is enabled Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:17:38 UTC 2024 (1): Starting Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: ---------------------------------------------------- Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: corporation. 
Support and training for ntp-4 are Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: available at https://www.nwtime.org/support Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: ---------------------------------------------------- Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: proto: precision = 0.080 usec (-23) Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: basedate set to 2024-08-23 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: gps base set to 2024-08-25 (week 2329) Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Listen normally on 3 eth0 10.128.0.52:123 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Listen normally on 4 lo [::1]:123 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: bind(21) AF_INET6 fe80::4001:aff:fe80:34%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:34%2#123 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:34%2 Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: Listening on routing socket on fd #21 for interface updates Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:51:53.956530 ntpd[1425]: 4 Sep 17:51:53 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:51:53.857356 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:51:53.959874 extend-filesystems[1440]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:51:53.913828 dbus-daemon[1418]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1375 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:51:53.894803 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:51:54.015330 extend-filesystems[1440]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 4 17:51:54.015330 extend-filesystems[1440]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 4 17:51:54.015330 extend-filesystems[1440]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 4 17:51:53.922235 ntpd[1425]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:17:38 UTC 2024 (1): Starting Sep 4 17:51:53.913466 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 4 17:51:54.049543 extend-filesystems[1420]: Resized filesystem in /dev/sda9 Sep 4 17:51:53.922273 ntpd[1425]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:51:54.059634 update_engine[1444]: I0904 17:51:53.998119 1444 main.cc:92] Flatcar Update Engine starting Sep 4 17:51:54.059634 update_engine[1444]: I0904 17:51:54.006335 1444 update_check_scheduler.cc:74] Next update check in 2m55s Sep 4 17:51:53.914286 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
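The extend-filesystems records above grow /dev/sda9 online with resize2fs, from 1617920 to 2538491 blocks of 4 KiB. A sketch to verify the result from userspace with statvfs; the reported block count can differ slightly from resize2fs's figure because of filesystem overhead:

    # Sketch: confirm the online resize of the root filesystem reported above
    # (2538491 x 4 KiB blocks on /dev/sda9, mounted at /).
    import os

    st = os.statvfs("/")
    total_bytes = st.f_frsize * st.f_blocks
    print(f"block size: {st.f_frsize} bytes, blocks: {st.f_blocks}")
    print(f"filesystem size: {total_bytes / 2**30:.2f} GiB")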
Sep 4 17:51:53.922288 ntpd[1425]: ---------------------------------------------------- Sep 4 17:51:53.919060 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:51:53.922301 ntpd[1425]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:51:53.956856 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:51:53.922316 ntpd[1425]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:51:53.959059 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:51:53.922331 ntpd[1425]: corporation. Support and training for ntp-4 are Sep 4 17:51:53.977337 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:51:53.922346 ntpd[1425]: available at https://www.nwtime.org/support Sep 4 17:51:53.977617 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:51:53.922361 ntpd[1425]: ---------------------------------------------------- Sep 4 17:51:53.980004 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:51:53.924432 ntpd[1425]: proto: precision = 0.080 usec (-23) Sep 4 17:51:53.980289 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:51:53.925551 ntpd[1425]: basedate set to 2024-08-23 Sep 4 17:51:53.925574 ntpd[1425]: gps base set to 2024-08-25 (week 2329) Sep 4 17:51:53.937077 ntpd[1425]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:51:53.937145 ntpd[1425]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:51:53.937427 ntpd[1425]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:51:53.937477 ntpd[1425]: Listen normally on 3 eth0 10.128.0.52:123 Sep 4 17:51:53.937531 ntpd[1425]: Listen normally on 4 lo [::1]:123 Sep 4 17:51:53.937588 ntpd[1425]: bind(21) AF_INET6 fe80::4001:aff:fe80:34%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:51:53.937616 ntpd[1425]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:34%2#123 Sep 4 17:51:53.937639 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:34%2 Sep 4 17:51:53.937699 ntpd[1425]: Listening on routing socket on fd #21 for interface updates Sep 4 17:51:53.944178 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:51:53.944218 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:51:54.076831 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:51:54.077092 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:51:54.099337 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:51:54.099633 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:51:54.100018 jq[1446]: true Sep 4 17:51:54.115187 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:51:54.115243 systemd-logind[1441]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 4 17:51:54.115274 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:51:54.115797 systemd-logind[1441]: New seat seat0. Sep 4 17:51:54.118014 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:51:54.145124 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:51:54.157035 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
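coreos-metadata finishes above after fetching the hostname, external IP, internal IP, and machine type from 169.254.169.254. The same endpoints can be queried directly; GCE requires the Metadata-Flavor: Google header, and this only works from inside an instance:

    # Sketch: query the GCE metadata server the same way coreos-metadata does
    # above. The Metadata-Flavor header is mandatory.
    from urllib.request import Request, urlopen

    BASE = "http://169.254.169.254/computeMetadata/v1"

    def metadata(path):
        req = Request(f"{BASE}/{path}", headers={"Metadata-Flavor": "Google"})
        with urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    print(metadata("instance/hostname"))
    print(metadata("instance/network-interfaces/0/ip"))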
Sep 4 17:51:54.162676 jq[1455]: true Sep 4 17:51:54.253914 dbus-daemon[1418]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:51:54.268948 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:51:54.287721 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:51:54.299741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:51:54.300015 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:51:54.300267 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:51:54.324235 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 17:51:54.331860 tar[1454]: linux-amd64/helm Sep 4 17:51:54.333835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:51:54.334094 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:51:54.356054 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:51:54.381695 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:51:54.382789 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:51:54.413467 systemd[1]: Starting sshkeys.service... Sep 4 17:51:54.500525 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:51:54.525548 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:51:54.590551 dbus-daemon[1418]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:51:54.590800 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:51:54.597308 dbus-daemon[1418]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1486 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:51:54.619227 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 4 17:51:54.624023 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:51:54.679452 coreos-metadata[1491]: Sep 04 17:51:54.679 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 4 17:51:54.687061 coreos-metadata[1491]: Sep 04 17:51:54.686 INFO Fetch failed with 404: resource not found Sep 4 17:51:54.687061 coreos-metadata[1491]: Sep 04 17:51:54.687 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 4 17:51:54.688159 coreos-metadata[1491]: Sep 04 17:51:54.687 INFO Fetch successful Sep 4 17:51:54.688159 coreos-metadata[1491]: Sep 04 17:51:54.688 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 4 17:51:54.689028 coreos-metadata[1491]: Sep 04 17:51:54.688 INFO Fetch failed with 404: resource not found Sep 4 17:51:54.689028 coreos-metadata[1491]: Sep 04 17:51:54.688 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 4 17:51:54.689854 coreos-metadata[1491]: Sep 04 17:51:54.689 INFO Fetch failed with 404: resource not found Sep 4 17:51:54.689854 coreos-metadata[1491]: Sep 04 17:51:54.689 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 4 17:51:54.690778 coreos-metadata[1491]: Sep 04 17:51:54.690 INFO Fetch successful Sep 4 17:51:54.700606 unknown[1491]: wrote ssh authorized keys file for user: core Sep 4 17:51:54.725875 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:51:54.728164 polkitd[1496]: Started polkitd version 121 Sep 4 17:51:54.736261 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:51:54.748717 update-ssh-keys[1512]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:51:54.751283 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:51:54.757796 polkitd[1496]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:51:54.757903 polkitd[1496]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:51:54.760107 polkitd[1496]: Finished loading, compiling and executing 2 rules Sep 4 17:51:54.763397 dbus-daemon[1418]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:51:54.764729 polkitd[1496]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:51:54.768795 systemd[1]: Started sshd@0-10.128.0.52:22-147.75.109.163:38338.service - OpenSSH per-connection server daemon (147.75.109.163:38338). Sep 4 17:51:54.783111 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:51:54.796599 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:51:54.805669 systemd[1]: Finished sshkeys.service. Sep 4 17:51:54.813943 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:51:54.814204 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:51:54.835535 systemd-hostnamed[1486]: Hostname set to (transient) Sep 4 17:51:54.836803 systemd-resolved[1314]: System hostname changed to 'ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal'. Sep 4 17:51:54.866092 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:51:54.917943 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
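systemd-hostnamed sets the hostname "(transient)" above, using the name delivered by the metadata agent rather than a static /etc/hostname. A small sketch, under the assumption that /etc/hostname may be missing or empty on this image, showing the difference between the two:

    # Sketch: compare the transient hostname set by systemd-hostnamed above
    # (the kernel hostname, derived here from GCE metadata) with the static
    # hostname in /etc/hostname, which may be absent or empty.
    import socket

    print("transient:", socket.gethostname())
    try:
        with open("/etc/hostname") as f:
            print("static:", f.read().strip() or "(empty)")
    except FileNotFoundError:
        print("static: (not set)")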
Sep 4 17:51:54.923288 ntpd[1425]: bind(24) AF_INET6 fe80::4001:aff:fe80:34%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:51:54.926155 ntpd[1425]: 4 Sep 17:51:54 ntpd[1425]: bind(24) AF_INET6 fe80::4001:aff:fe80:34%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:51:54.926155 ntpd[1425]: 4 Sep 17:51:54 ntpd[1425]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:34%2#123 Sep 4 17:51:54.926155 ntpd[1425]: 4 Sep 17:51:54 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:34%2 Sep 4 17:51:54.925960 ntpd[1425]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:34%2#123 Sep 4 17:51:54.925983 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:34%2 Sep 4 17:51:54.943196 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:51:54.959397 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:51:54.970183 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:51:54.995695 containerd[1456]: time="2024-09-04T17:51:54.995102488Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:51:55.057527 containerd[1456]: time="2024-09-04T17:51:55.057384437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:51:55.061258 containerd[1456]: time="2024-09-04T17:51:55.060653551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:51:55.061258 containerd[1456]: time="2024-09-04T17:51:55.060736507Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:51:55.061258 containerd[1456]: time="2024-09-04T17:51:55.060764365Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:51:55.061258 containerd[1456]: time="2024-09-04T17:51:55.061015790Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:51:55.061258 containerd[1456]: time="2024-09-04T17:51:55.061043158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:51:55.061258 containerd[1456]: time="2024-09-04T17:51:55.061141293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:51:55.061258 containerd[1456]: time="2024-09-04T17:51:55.061163756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:51:55.061919 containerd[1456]: time="2024-09-04T17:51:55.061886182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:51:55.062051 containerd[1456]: time="2024-09-04T17:51:55.062029578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:51:55.062649 containerd[1456]: time="2024-09-04T17:51:55.062121243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:51:55.062649 containerd[1456]: time="2024-09-04T17:51:55.062144411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:51:55.062649 containerd[1456]: time="2024-09-04T17:51:55.062276132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:51:55.062649 containerd[1456]: time="2024-09-04T17:51:55.062594671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:51:55.063202 containerd[1456]: time="2024-09-04T17:51:55.063167095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:51:55.063306 containerd[1456]: time="2024-09-04T17:51:55.063286438Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:51:55.063533 containerd[1456]: time="2024-09-04T17:51:55.063510446Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:51:55.063853 containerd[1456]: time="2024-09-04T17:51:55.063739003Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:51:55.072375 containerd[1456]: time="2024-09-04T17:51:55.071093610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:51:55.072375 containerd[1456]: time="2024-09-04T17:51:55.071265192Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.073686398Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.073754679Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.073784073Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074005612Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074568663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074762288Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074788183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074810801Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074836374Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074857161Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074878498Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074902946Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074927001Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.075687 containerd[1456]: time="2024-09-04T17:51:55.074950232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.074971800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.074991914Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075025422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075050011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075071881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075102960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075122737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075143983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075163440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075184936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075206035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075229305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075251648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 4 17:51:55.076345 containerd[1456]: time="2024-09-04T17:51:55.075272078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075292403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075323104Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075357941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075378753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075408083Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075518425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075548713Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075654036Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075691656Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075707576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075729762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075745376Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:51:55.076959 containerd[1456]: time="2024-09-04T17:51:55.075762652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:51:55.077613 containerd[1456]: time="2024-09-04T17:51:55.076250024Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:51:55.077613 containerd[1456]: time="2024-09-04T17:51:55.076350081Z" level=info msg="Connect containerd service" Sep 4 17:51:55.077613 containerd[1456]: time="2024-09-04T17:51:55.076393147Z" level=info msg="using legacy CRI server" Sep 4 17:51:55.077613 containerd[1456]: time="2024-09-04T17:51:55.076407813Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:51:55.077613 containerd[1456]: time="2024-09-04T17:51:55.076563006Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:51:55.077613 containerd[1456]: time="2024-09-04T17:51:55.077587003Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:51:55.079308 
containerd[1456]: time="2024-09-04T17:51:55.077706177Z" level=info msg="Start subscribing containerd event" Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.077771671Z" level=info msg="Start recovering state" Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.077859121Z" level=info msg="Start event monitor" Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.077882058Z" level=info msg="Start snapshots syncer" Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.077895714Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.077908540Z" level=info msg="Start streaming server" Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.078519976Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.078593646Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:51:55.079308 containerd[1456]: time="2024-09-04T17:51:55.078687694Z" level=info msg="containerd successfully booted in 0.085399s" Sep 4 17:51:55.078848 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:51:55.133846 systemd-networkd[1375]: eth0: Gained IPv6LL Sep 4 17:51:55.143022 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:51:55.154745 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:51:55.174748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:51:55.196109 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:51:55.203152 sshd[1521]: Accepted publickey for core from 147.75.109.163 port 38338 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:51:55.208899 sshd[1521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:51:55.214081 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 4 17:51:55.246220 init.sh[1540]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 4 17:51:55.246783 init.sh[1540]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 4 17:51:55.248160 init.sh[1540]: + /usr/bin/google_instance_setup Sep 4 17:51:55.261086 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:51:55.280749 systemd-logind[1441]: New session 1 of user core. Sep 4 17:51:55.283593 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:51:55.304137 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:51:55.362995 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:51:55.387511 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:51:55.437325 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:51:55.529305 tar[1454]: linux-amd64/LICENSE Sep 4 17:51:55.529305 tar[1454]: linux-amd64/README.md Sep 4 17:51:55.563571 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:51:55.702447 systemd[1552]: Queued start job for default target default.target. Sep 4 17:51:55.709293 systemd[1552]: Created slice app.slice - User Application Slice. Sep 4 17:51:55.709343 systemd[1552]: Reached target paths.target - Paths. Sep 4 17:51:55.709366 systemd[1552]: Reached target timers.target - Timers. 
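During the containerd startup above, the CRI plugin logs "no network config found in /etc/cni/net.d: cni plugin not initialized". That is expected this early: a CNI plugin installed later drops its configuration there. A sketch that simply reports what, if anything, is present in that directory:

    # Sketch: containerd's CRI plugin warned above that no CNI network config
    # was found in /etc/cni/net.d (normal before a CNI plugin is installed).
    # List whatever configuration files exist there.
    from pathlib import Path

    conf_dir = Path("/etc/cni/net.d")
    configs = sorted(conf_dir.glob("*.conf*")) if conf_dir.is_dir() else []
    if configs:
        for c in configs:
            print("found CNI config:", c)
    else:
        print("no CNI network config in", conf_dir)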
Sep 4 17:51:55.713856 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:51:55.741887 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:51:55.744469 systemd[1552]: Reached target sockets.target - Sockets. Sep 4 17:51:55.744702 systemd[1552]: Reached target basic.target - Basic System. Sep 4 17:51:55.744985 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:51:55.745513 systemd[1552]: Reached target default.target - Main User Target. Sep 4 17:51:55.745711 systemd[1552]: Startup finished in 290ms. Sep 4 17:51:55.763336 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:51:56.014197 systemd[1]: Started sshd@1-10.128.0.52:22-147.75.109.163:34220.service - OpenSSH per-connection server daemon (147.75.109.163:34220). Sep 4 17:51:56.176932 instance-setup[1547]: INFO Running google_set_multiqueue. Sep 4 17:51:56.195642 instance-setup[1547]: INFO Set channels for eth0 to 2. Sep 4 17:51:56.200398 instance-setup[1547]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Sep 4 17:51:56.202819 instance-setup[1547]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Sep 4 17:51:56.202884 instance-setup[1547]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Sep 4 17:51:56.204848 instance-setup[1547]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Sep 4 17:51:56.205712 instance-setup[1547]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Sep 4 17:51:56.208092 instance-setup[1547]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Sep 4 17:51:56.208148 instance-setup[1547]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Sep 4 17:51:56.210652 instance-setup[1547]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Sep 4 17:51:56.222254 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 4 17:51:56.227082 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 4 17:51:56.229721 instance-setup[1547]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 4 17:51:56.229956 instance-setup[1547]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 4 17:51:56.263785 init.sh[1540]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 4 17:51:56.356808 sshd[1568]: Accepted publickey for core from 147.75.109.163 port 34220 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:51:56.358743 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:51:56.372344 systemd-logind[1441]: New session 2 of user core. Sep 4 17:51:56.376756 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:51:56.482340 startup-script[1598]: INFO Starting startup scripts. Sep 4 17:51:56.489436 startup-script[1598]: INFO No startup scripts found in metadata. Sep 4 17:51:56.489518 startup-script[1598]: INFO Finished running startup scripts. Sep 4 17:51:56.518267 init.sh[1540]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 4 17:51:56.518267 init.sh[1540]: + daemon_pids=() Sep 4 17:51:56.518267 init.sh[1540]: + for d in accounts clock_skew network Sep 4 17:51:56.518267 init.sh[1540]: + daemon_pids+=($!) Sep 4 17:51:56.518267 init.sh[1540]: + for d in accounts clock_skew network Sep 4 17:51:56.518267 init.sh[1540]: + daemon_pids+=($!) 
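google_set_multiqueue above pins the virtio network IRQs 27 through 30 to CPUs 0 and 1 and writes XPS masks for the eth0 transmit queues. A sketch to read those settings back; the IRQ numbers are the ones shown in this log and will differ on other instances:

    # Sketch: read back the IRQ affinities and XPS masks configured by
    # google_set_multiqueue above. IRQ numbers are taken from this log.
    from pathlib import Path

    for irq in (27, 28, 29, 30):
        p = Path(f"/proc/irq/{irq}/smp_affinity_list")
        if p.exists():
            print(f"irq {irq}: cpus {p.read_text().strip()}")

    for xps in sorted(Path("/sys/class/net/eth0/queues").glob("tx-*/xps_cpus")):
        print(xps.parent.name, "xps_cpus =", xps.read_text().strip())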
Sep 4 17:51:56.518267 init.sh[1540]: + for d in accounts clock_skew network Sep 4 17:51:56.518645 init.sh[1603]: + /usr/bin/google_clock_skew_daemon Sep 4 17:51:56.519065 init.sh[1604]: + /usr/bin/google_network_daemon Sep 4 17:51:56.520449 init.sh[1540]: + daemon_pids+=($!) Sep 4 17:51:56.520449 init.sh[1540]: + NOTIFY_SOCKET=/run/systemd/notify Sep 4 17:51:56.520449 init.sh[1540]: + /usr/bin/systemd-notify --ready Sep 4 17:51:56.520856 init.sh[1602]: + /usr/bin/google_accounts_daemon Sep 4 17:51:56.538778 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 4 17:51:56.551834 init.sh[1540]: + wait -n 1602 1603 1604 Sep 4 17:51:56.581925 sshd[1568]: pam_unix(sshd:session): session closed for user core Sep 4 17:51:56.590635 systemd[1]: sshd@1-10.128.0.52:22-147.75.109.163:34220.service: Deactivated successfully. Sep 4 17:51:56.594289 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:51:56.595604 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:51:56.597453 systemd-logind[1441]: Removed session 2. Sep 4 17:51:56.642126 systemd[1]: Started sshd@2-10.128.0.52:22-147.75.109.163:34236.service - OpenSSH per-connection server daemon (147.75.109.163:34236). Sep 4 17:51:56.982106 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 34236 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:51:56.982979 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:51:56.998585 systemd-logind[1441]: New session 3 of user core. Sep 4 17:51:57.002981 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:51:57.043784 google-networking[1604]: INFO Starting Google Networking daemon. Sep 4 17:51:57.124284 google-clock-skew[1603]: INFO Starting Google Clock Skew daemon. Sep 4 17:51:57.137016 google-clock-skew[1603]: INFO Clock drift token has changed: 0. Sep 4 17:51:57.164001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:51:57.177037 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:51:57.183299 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:51:57.186565 groupadd[1624]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 4 17:51:57.187727 systemd[1]: Startup finished in 1.042s (kernel) + 9.113s (initrd) + 9.364s (userspace) = 19.521s. Sep 4 17:51:57.191776 groupadd[1624]: group added to /etc/gshadow: name=google-sudoers Sep 4 17:51:57.223038 sshd[1610]: pam_unix(sshd:session): session closed for user core Sep 4 17:51:57.232329 systemd[1]: sshd@2-10.128.0.52:22-147.75.109.163:34236.service: Deactivated successfully. Sep 4 17:51:57.236541 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:51:57.239381 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:51:57.242150 systemd-logind[1441]: Removed session 3. Sep 4 17:51:57.266632 groupadd[1624]: new group: name=google-sudoers, GID=1000 Sep 4 17:51:57.296755 google-accounts[1602]: INFO Starting Google Accounts daemon. Sep 4 17:51:57.309502 google-accounts[1602]: WARNING OS Login not installed. Sep 4 17:51:57.311304 google-accounts[1602]: INFO Creating a new user account for 0. Sep 4 17:51:57.316396 init.sh[1642]: useradd: invalid user name '0': use --badname to ignore Sep 4 17:51:57.316708 google-accounts[1602]: WARNING Could not create user 0. 
Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 4 17:51:57.922925 ntpd[1425]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:34%2]:123 Sep 4 17:51:57.923516 ntpd[1425]: 4 Sep 17:51:57 ntpd[1425]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:34%2]:123 Sep 4 17:51:58.000684 systemd-resolved[1314]: Clock change detected. Flushing caches. Sep 4 17:51:58.001033 google-clock-skew[1603]: INFO Synced system time with hardware clock. Sep 4 17:51:58.155366 kubelet[1628]: E0904 17:51:58.155226 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:51:58.164023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:51:58.164514 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:51:58.165107 systemd[1]: kubelet.service: Consumed 1.308s CPU time. Sep 4 17:52:06.186004 systemd[1]: Started sshd@3-10.128.0.52:22-220.134.146.222:34266.service - OpenSSH per-connection server daemon (220.134.146.222:34266). Sep 4 17:52:07.276626 systemd[1]: Started sshd@4-10.128.0.52:22-147.75.109.163:44510.service - OpenSSH per-connection server daemon (147.75.109.163:44510). Sep 4 17:52:07.309199 sshd[1652]: Received disconnect from 220.134.146.222 port 34266:11: Bye Bye [preauth] Sep 4 17:52:07.309199 sshd[1652]: Disconnected from authenticating user root 220.134.146.222 port 34266 [preauth] Sep 4 17:52:07.312061 systemd[1]: sshd@3-10.128.0.52:22-220.134.146.222:34266.service: Deactivated successfully. Sep 4 17:52:07.569354 sshd[1655]: Accepted publickey for core from 147.75.109.163 port 44510 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:52:07.571343 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:52:07.577520 systemd-logind[1441]: New session 4 of user core. Sep 4 17:52:07.588500 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:52:07.782684 sshd[1655]: pam_unix(sshd:session): session closed for user core Sep 4 17:52:07.787088 systemd[1]: sshd@4-10.128.0.52:22-147.75.109.163:44510.service: Deactivated successfully. Sep 4 17:52:07.789595 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:52:07.791374 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:52:07.793072 systemd-logind[1441]: Removed session 4. Sep 4 17:52:07.838001 systemd[1]: Started sshd@5-10.128.0.52:22-147.75.109.163:44520.service - OpenSSH per-connection server daemon (147.75.109.163:44520). Sep 4 17:52:08.131948 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 44520 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:52:08.133649 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:52:08.140115 systemd-logind[1441]: New session 5 of user core. Sep 4 17:52:08.150512 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:52:08.338876 sshd[1664]: pam_unix(sshd:session): session closed for user core Sep 4 17:52:08.343222 systemd[1]: sshd@5-10.128.0.52:22-147.75.109.163:44520.service: Deactivated successfully. Sep 4 17:52:08.345676 systemd[1]: session-5.scope: Deactivated successfully. 
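ntpd finally opens a socket on eth0's fe80:: address above, after failing to bind it repeatedly earlier in the boot; the address only becomes usable once the interface has gained its IPv6 link-local address (the "eth0: Gained IPv6LL" record), typically after duplicate address detection finishes. A Linux-specific sketch that checks for such an address via /proc/net/if_inet6:

    # Sketch: check whether eth0 has the IPv6 link-local (fe80::/10) address
    # that ntpd could not bind earlier in this log. /proc/net/if_inet6 prints
    # each address as 32 hex digits without colons.
    with open("/proc/net/if_inet6") as f:
        for line in f:
            addr_hex, _idx, _plen, _scope, _flags, ifname = line.split()
            if ifname == "eth0" and addr_hex.startswith("fe80"):
                addr = ":".join(addr_hex[i:i+4] for i in range(0, 32, 4))
                print("eth0 link-local:", addr)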
Sep 4 17:52:08.346831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:52:08.348805 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:52:08.354522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:52:08.357050 systemd-logind[1441]: Removed session 5. Sep 4 17:52:08.389661 systemd[1]: Started sshd@6-10.128.0.52:22-147.75.109.163:44524.service - OpenSSH per-connection server daemon (147.75.109.163:44524). Sep 4 17:52:08.675351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:52:08.687832 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:52:08.691668 sshd[1674]: Accepted publickey for core from 147.75.109.163 port 44524 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:52:08.694772 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:52:08.704040 systemd-logind[1441]: New session 6 of user core. Sep 4 17:52:08.711546 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:52:08.760623 kubelet[1681]: E0904 17:52:08.760572 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:52:08.765970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:52:08.766247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:52:08.908512 sshd[1674]: pam_unix(sshd:session): session closed for user core Sep 4 17:52:08.912935 systemd[1]: sshd@6-10.128.0.52:22-147.75.109.163:44524.service: Deactivated successfully. Sep 4 17:52:08.915418 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:52:08.917533 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:52:08.919311 systemd-logind[1441]: Removed session 6. Sep 4 17:52:08.969788 systemd[1]: Started sshd@7-10.128.0.52:22-147.75.109.163:44528.service - OpenSSH per-connection server daemon (147.75.109.163:44528). Sep 4 17:52:09.251132 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 44528 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:52:09.253092 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:52:09.258905 systemd-logind[1441]: New session 7 of user core. Sep 4 17:52:09.270514 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:52:09.447259 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:52:09.447839 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:52:09.468527 sudo[1697]: pam_unix(sudo:session): session closed for user root Sep 4 17:52:09.511831 sshd[1694]: pam_unix(sshd:session): session closed for user core Sep 4 17:52:09.516923 systemd[1]: sshd@7-10.128.0.52:22-147.75.109.163:44528.service: Deactivated successfully. Sep 4 17:52:09.519330 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:52:09.521311 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:52:09.522851 systemd-logind[1441]: Removed session 7. 
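The kubelet keeps exiting above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally generated by kubeadm init or kubeadm join, so systemd restarting the unit until then is expected. A small sketch that just reports the current state:

    # Sketch: the kubelet above fails because /var/lib/kubelet/config.yaml is
    # missing; kubeadm normally generates it, and the unit is restarted until
    # it appears.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.exists():
        print("kubelet config present,", cfg.stat().st_size, "bytes")
    else:
        # A generated file starts with a KubeletConfiguration document, e.g.
        #   apiVersion: kubelet.config.k8s.io/v1beta1
        #   kind: KubeletConfiguration
        print("kubelet config missing; run kubeadm before expecting kubelet to stay up")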
Sep 4 17:52:09.566993 systemd[1]: Started sshd@8-10.128.0.52:22-147.75.109.163:44542.service - OpenSSH per-connection server daemon (147.75.109.163:44542). Sep 4 17:52:09.849229 sshd[1702]: Accepted publickey for core from 147.75.109.163 port 44542 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:52:09.851204 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:52:09.858433 systemd-logind[1441]: New session 8 of user core. Sep 4 17:52:09.865440 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:52:10.027066 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:52:10.027582 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:52:10.032905 sudo[1706]: pam_unix(sudo:session): session closed for user root Sep 4 17:52:10.050357 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:52:10.050869 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:52:10.070671 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:52:10.082547 auditctl[1709]: No rules Sep 4 17:52:10.083477 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:52:10.083768 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:52:10.091906 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:52:10.126982 augenrules[1727]: No rules Sep 4 17:52:10.127816 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:52:10.130021 sudo[1705]: pam_unix(sudo:session): session closed for user root Sep 4 17:52:10.172939 sshd[1702]: pam_unix(sshd:session): session closed for user core Sep 4 17:52:10.178426 systemd[1]: sshd@8-10.128.0.52:22-147.75.109.163:44542.service: Deactivated successfully. Sep 4 17:52:10.180644 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:52:10.181607 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:52:10.183102 systemd-logind[1441]: Removed session 8. Sep 4 17:52:10.232600 systemd[1]: Started sshd@9-10.128.0.52:22-147.75.109.163:44544.service - OpenSSH per-connection server daemon (147.75.109.163:44544). Sep 4 17:52:10.529533 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 44544 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:52:10.531435 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:52:10.537576 systemd-logind[1441]: New session 9 of user core. Sep 4 17:52:10.544360 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:52:10.710497 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:52:10.710988 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:52:10.873611 systemd[1]: Starting docker.service - Docker Application Container Engine... 
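docker.service starts here, and the records that follow show dockerd reporting "API listen on /run/docker.sock". A sketch that talks to that API over the unix socket with a plain HTTP request; it needs root or membership in the docker group:

    # Sketch: query the Docker Engine API on the unix socket the following
    # records show dockerd exposing (/run/docker.sock).
    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk
    print(reply.decode(errors="replace"))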
Sep 4 17:52:10.876685 (dockerd)[1747]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:52:11.322976 dockerd[1747]: time="2024-09-04T17:52:11.322804088Z" level=info msg="Starting up" Sep 4 17:52:11.477842 dockerd[1747]: time="2024-09-04T17:52:11.477786884Z" level=info msg="Loading containers: start." Sep 4 17:52:11.642245 kernel: Initializing XFRM netlink socket Sep 4 17:52:11.759068 systemd-networkd[1375]: docker0: Link UP Sep 4 17:52:11.782427 dockerd[1747]: time="2024-09-04T17:52:11.781393017Z" level=info msg="Loading containers: done." Sep 4 17:52:11.803118 dockerd[1747]: time="2024-09-04T17:52:11.803051417Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:52:11.803527 dockerd[1747]: time="2024-09-04T17:52:11.803497602Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:52:11.803769 dockerd[1747]: time="2024-09-04T17:52:11.803744508Z" level=info msg="Daemon has completed initialization" Sep 4 17:52:11.843296 dockerd[1747]: time="2024-09-04T17:52:11.843200410Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:52:11.843750 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:52:12.931087 containerd[1456]: time="2024-09-04T17:52:12.931036744Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\"" Sep 4 17:52:13.423344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937662946.mount: Deactivated successfully. Sep 4 17:52:15.250969 containerd[1456]: time="2024-09-04T17:52:15.250882851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:15.252726 containerd[1456]: time="2024-09-04T17:52:15.252653997Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=32779044" Sep 4 17:52:15.259636 containerd[1456]: time="2024-09-04T17:52:15.259498491Z" level=info msg="ImageCreate event name:\"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:15.264719 containerd[1456]: time="2024-09-04T17:52:15.264101810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:15.265800 containerd[1456]: time="2024-09-04T17:52:15.265744254Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"32769216\" in 2.333936985s" Sep 4 17:52:15.265921 containerd[1456]: time="2024-09-04T17:52:15.265808108Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\"" Sep 4 17:52:15.299417 containerd[1456]: time="2024-09-04T17:52:15.299366085Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.30.4\"" Sep 4 17:52:17.223782 containerd[1456]: time="2024-09-04T17:52:17.223697596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:17.225626 containerd[1456]: time="2024-09-04T17:52:17.225420444Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=29595999" Sep 4 17:52:17.227474 containerd[1456]: time="2024-09-04T17:52:17.227398589Z" level=info msg="ImageCreate event name:\"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:17.231875 containerd[1456]: time="2024-09-04T17:52:17.231780911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:17.233512 containerd[1456]: time="2024-09-04T17:52:17.233333020Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"31144011\" in 1.933915093s" Sep 4 17:52:17.233512 containerd[1456]: time="2024-09-04T17:52:17.233390318Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\"" Sep 4 17:52:17.267272 containerd[1456]: time="2024-09-04T17:52:17.267195124Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\"" Sep 4 17:52:18.466622 containerd[1456]: time="2024-09-04T17:52:18.466556209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:18.468590 containerd[1456]: time="2024-09-04T17:52:18.468319433Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=17782149" Sep 4 17:52:18.476118 containerd[1456]: time="2024-09-04T17:52:18.475424949Z" level=info msg="ImageCreate event name:\"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:18.481303 containerd[1456]: time="2024-09-04T17:52:18.481252137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:18.482749 containerd[1456]: time="2024-09-04T17:52:18.482689008Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"19330197\" in 1.215441913s" Sep 4 17:52:18.482749 containerd[1456]: time="2024-09-04T17:52:18.482746727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference 
\"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\"" Sep 4 17:52:18.512816 containerd[1456]: time="2024-09-04T17:52:18.512768277Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\"" Sep 4 17:52:19.016770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:52:19.024858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:52:19.476541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:52:19.489432 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:52:19.583479 kubelet[1981]: E0904 17:52:19.583195 1981 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:52:19.587706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:52:19.587940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:52:20.049142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556902188.mount: Deactivated successfully. Sep 4 17:52:20.710140 containerd[1456]: time="2024-09-04T17:52:20.710053020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:20.712194 containerd[1456]: time="2024-09-04T17:52:20.711937948Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=29039056" Sep 4 17:52:20.714181 containerd[1456]: time="2024-09-04T17:52:20.714094725Z" level=info msg="ImageCreate event name:\"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:20.721931 containerd[1456]: time="2024-09-04T17:52:20.720306340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:20.721931 containerd[1456]: time="2024-09-04T17:52:20.721699532Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"29036180\" in 2.208665538s" Sep 4 17:52:20.721931 containerd[1456]: time="2024-09-04T17:52:20.721765977Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\"" Sep 4 17:52:20.759945 containerd[1456]: time="2024-09-04T17:52:20.759893356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:52:21.152494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868377521.mount: Deactivated successfully. 
Sep 4 17:52:22.295924 containerd[1456]: time="2024-09-04T17:52:22.295843262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:22.297868 containerd[1456]: time="2024-09-04T17:52:22.297803107Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Sep 4 17:52:22.299352 containerd[1456]: time="2024-09-04T17:52:22.299274699Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:22.308684 containerd[1456]: time="2024-09-04T17:52:22.308601441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:22.310702 containerd[1456]: time="2024-09-04T17:52:22.310147408Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.549924234s" Sep 4 17:52:22.310702 containerd[1456]: time="2024-09-04T17:52:22.310225867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:52:22.339147 containerd[1456]: time="2024-09-04T17:52:22.339100046Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:52:22.697345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839061719.mount: Deactivated successfully. 
Sep 4 17:52:22.705671 containerd[1456]: time="2024-09-04T17:52:22.705606377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:22.707319 containerd[1456]: time="2024-09-04T17:52:22.707180775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Sep 4 17:52:22.708629 containerd[1456]: time="2024-09-04T17:52:22.708558007Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:22.714758 containerd[1456]: time="2024-09-04T17:52:22.713570958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:22.714758 containerd[1456]: time="2024-09-04T17:52:22.714592063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 375.222512ms" Sep 4 17:52:22.714758 containerd[1456]: time="2024-09-04T17:52:22.714636175Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:52:22.745024 containerd[1456]: time="2024-09-04T17:52:22.744966232Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Sep 4 17:52:23.218725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196783693.mount: Deactivated successfully. Sep 4 17:52:24.872014 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
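Each completed pull above is logged with the image reference, compressed size in bytes, and wall-clock duration (for example 32769216 bytes in about 2.33 s for kube-apiserver). A rough sketch, assuming only the escaped-quote msg format and Go-style durations shown in these lines, to turn them into throughput rows:

```python
import re

# Rough sketch (not a containerd tool): parse the 'Pulled image "<ref>" ...
# size "<bytes>" in <duration>' messages from this log. Quotes are
# backslash-escaped inside the journal's msg="..." field, and durations use
# Go's "2.333936985s" / "375.222512ms" forms, as seen above.
PULLED_RE = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\" with image id .*?'
    r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)\b'
)

def pull_stats(journal_text: str):
    rows = []
    for m in PULLED_RE.finditer(journal_text):
        seconds = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
        mib = int(m["size"]) / (1024 * 1024)
        rows.append((m["image"], round(mib, 1), round(seconds, 2), round(mib / seconds, 1)))
    return rows  # -> [(image, MiB, seconds, MiB_per_s), ...]
```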
Sep 4 17:52:25.772538 containerd[1456]: time="2024-09-04T17:52:25.772463345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:25.774232 containerd[1456]: time="2024-09-04T17:52:25.774162489Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Sep 4 17:52:25.775332 containerd[1456]: time="2024-09-04T17:52:25.775260370Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:25.780190 containerd[1456]: time="2024-09-04T17:52:25.779333316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:25.781133 containerd[1456]: time="2024-09-04T17:52:25.780925611Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.035914877s" Sep 4 17:52:25.781133 containerd[1456]: time="2024-09-04T17:52:25.780974899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Sep 4 17:52:29.838542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:52:29.849573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:52:30.208183 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:52:30.208337 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:52:30.208793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:52:30.223569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:52:30.251973 systemd[1]: Reloading requested from client PID 2167 ('systemctl') (unit session-9.scope)... Sep 4 17:52:30.252000 systemd[1]: Reloading... Sep 4 17:52:30.397227 zram_generator::config[2204]: No configuration found. Sep 4 17:52:30.582399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:52:30.704830 systemd[1]: Reloading finished in 452 ms. Sep 4 17:52:30.776019 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:52:30.776126 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:52:30.776654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:52:30.780763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:52:31.106125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:52:31.120762 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:52:31.179486 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:52:31.179486 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:52:31.179486 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:52:31.180062 kubelet[2258]: I0904 17:52:31.179584 2258 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:52:31.679047 kubelet[2258]: I0904 17:52:31.678996 2258 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:52:31.679047 kubelet[2258]: I0904 17:52:31.679030 2258 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:52:31.679428 kubelet[2258]: I0904 17:52:31.679391 2258 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:52:31.702684 kubelet[2258]: I0904 17:52:31.701520 2258 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:52:31.702684 kubelet[2258]: E0904 17:52:31.702629 2258 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.728684 kubelet[2258]: I0904 17:52:31.728635 2258 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:52:31.732030 kubelet[2258]: I0904 17:52:31.731943 2258 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:52:31.732441 kubelet[2258]: I0904 17:52:31.732012 2258 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:52:31.732697 kubelet[2258]: I0904 17:52:31.732456 2258 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:52:31.732697 kubelet[2258]: I0904 17:52:31.732477 2258 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:52:31.732697 kubelet[2258]: I0904 17:52:31.732697 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:52:31.734550 kubelet[2258]: I0904 17:52:31.734471 2258 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:52:31.734550 kubelet[2258]: I0904 17:52:31.734536 2258 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:52:31.735113 kubelet[2258]: I0904 17:52:31.734591 2258 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:52:31.735113 kubelet[2258]: I0904 17:52:31.734620 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:52:31.742159 kubelet[2258]: W0904 17:52:31.741807 2258 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.742159 kubelet[2258]: E0904 17:52:31.741901 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.742452 kubelet[2258]: W0904 17:52:31.742394 2258 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.128.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.742452 kubelet[2258]: E0904 17:52:31.742441 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.742608 kubelet[2258]: I0904 17:52:31.742579 2258 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:52:31.746001 kubelet[2258]: I0904 17:52:31.745865 2258 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:52:31.746184 kubelet[2258]: W0904 17:52:31.746060 2258 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:52:31.747770 kubelet[2258]: I0904 17:52:31.747495 2258 server.go:1264] "Started kubelet" Sep 4 17:52:31.750226 kubelet[2258]: I0904 17:52:31.750144 2258 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:52:31.751638 kubelet[2258]: I0904 17:52:31.751580 2258 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:52:31.754693 kubelet[2258]: I0904 17:52:31.754636 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:52:31.761035 kubelet[2258]: I0904 17:52:31.759953 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:52:31.761035 kubelet[2258]: I0904 17:52:31.760329 2258 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:52:31.761035 kubelet[2258]: E0904 17:52:31.760554 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal.17f21bf643aa4033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,UID:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,},FirstTimestamp:2024-09-04 17:52:31.747457075 +0000 UTC m=+0.620423406,LastTimestamp:2024-09-04 17:52:31.747457075 +0000 UTC m=+0.620423406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,}" Sep 4 17:52:31.763657 kubelet[2258]: I0904 17:52:31.763623 2258 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:52:31.766143 kubelet[2258]: I0904 17:52:31.764448 2258 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:52:31.766143 kubelet[2258]: I0904 17:52:31.764530 2258 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:52:31.766143 kubelet[2258]: W0904 17:52:31.764914 2258 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.766143 kubelet[2258]: E0904 17:52:31.764978 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.766143 kubelet[2258]: E0904 17:52:31.765046 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.52:6443: connect: connection refused" interval="200ms" Sep 4 17:52:31.767087 kubelet[2258]: I0904 17:52:31.767055 2258 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:52:31.767214 kubelet[2258]: I0904 17:52:31.767187 2258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:52:31.770273 kubelet[2258]: I0904 17:52:31.770248 2258 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:52:31.770535 kubelet[2258]: E0904 17:52:31.770511 2258 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:52:31.793909 kubelet[2258]: I0904 17:52:31.793850 2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:52:31.795846 kubelet[2258]: I0904 17:52:31.795811 2258 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:52:31.795965 kubelet[2258]: I0904 17:52:31.795856 2258 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:52:31.795965 kubelet[2258]: I0904 17:52:31.795887 2258 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:52:31.795965 kubelet[2258]: E0904 17:52:31.795953 2258 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:52:31.807516 kubelet[2258]: W0904 17:52:31.807443 2258 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.807741 kubelet[2258]: E0904 17:52:31.807721 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:31.809306 kubelet[2258]: I0904 17:52:31.809280 2258 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:52:31.809306 kubelet[2258]: I0904 17:52:31.809302 2258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:52:31.809459 kubelet[2258]: I0904 17:52:31.809341 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:52:31.812203 kubelet[2258]: I0904 17:52:31.812143 2258 policy_none.go:49] "None policy: Start" Sep 4 17:52:31.813173 kubelet[2258]: I0904 17:52:31.813115 2258 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:52:31.813173 kubelet[2258]: I0904 17:52:31.813165 2258 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:52:31.820896 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:52:31.829764 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:52:31.835317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:52:31.848693 kubelet[2258]: I0904 17:52:31.848650 2258 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:52:31.849014 kubelet[2258]: I0904 17:52:31.848960 2258 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:52:31.849221 kubelet[2258]: I0904 17:52:31.849200 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:52:31.851856 kubelet[2258]: E0904 17:52:31.851812 2258 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" not found" Sep 4 17:52:31.870895 kubelet[2258]: I0904 17:52:31.870845 2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.871388 kubelet[2258]: E0904 17:52:31.871333 2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.52:6443/api/v1/nodes\": dial tcp 10.128.0.52:6443: connect: connection refused" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.896970 kubelet[2258]: I0904 17:52:31.896810 2258 topology_manager.go:215] "Topology Admit Handler" podUID="8574320d7ccd85816ae74528c6ffea56" podNamespace="kube-system" podName="kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.903368 kubelet[2258]: I0904 17:52:31.903304 2258 topology_manager.go:215] "Topology Admit Handler" podUID="1799e8b44409f55219d0dc04e79c4af8" podNamespace="kube-system" podName="kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.908559 kubelet[2258]: I0904 17:52:31.908517 2258 topology_manager.go:215] "Topology Admit Handler" podUID="e663d30317abfd35f1bbf92e7f3f51e3" podNamespace="kube-system" podName="kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.918301 systemd[1]: Created slice kubepods-burstable-pod8574320d7ccd85816ae74528c6ffea56.slice - libcontainer container kubepods-burstable-pod8574320d7ccd85816ae74528c6ffea56.slice. Sep 4 17:52:31.936671 systemd[1]: Created slice kubepods-burstable-pod1799e8b44409f55219d0dc04e79c4af8.slice - libcontainer container kubepods-burstable-pod1799e8b44409f55219d0dc04e79c4af8.slice. Sep 4 17:52:31.950681 systemd[1]: Created slice kubepods-burstable-pode663d30317abfd35f1bbf92e7f3f51e3.slice - libcontainer container kubepods-burstable-pode663d30317abfd35f1bbf92e7f3f51e3.slice. 
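With the systemd cgroup driver from the node config above, each admitted static pod gets a QoS-scoped slice whose name embeds the pod UID, e.g. kubepods-burstable-pod8574320d7ccd85816ae74528c6ffea56.slice. A small illustrative mapping; the dash-to-underscore handling is an assumption for dashed UIDs, which do not occur in this log:

```python
# Illustrative only: reproduce the slice names visible in this log from the pod
# UIDs in the "Topology Admit Handler" lines. The burstable QoS class and the
# systemd cgroup driver are taken from the log; replacing "-" with "_" for
# dashed UIDs is an assumption (the UIDs here contain no dashes).
def pod_slice_name(pod_uid: str, qos: str = "burstable") -> str:
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

assert (pod_slice_name("8574320d7ccd85816ae74528c6ffea56")
        == "kubepods-burstable-pod8574320d7ccd85816ae74528c6ffea56.slice")
```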
Sep 4 17:52:31.966333 kubelet[2258]: I0904 17:52:31.965876 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8574320d7ccd85816ae74528c6ffea56-ca-certs\") pod \"kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"8574320d7ccd85816ae74528c6ffea56\") " pod="kube-system/kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966333 kubelet[2258]: I0904 17:52:31.965937 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-flexvolume-dir\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966333 kubelet[2258]: I0904 17:52:31.965973 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966333 kubelet[2258]: I0904 17:52:31.966004 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e663d30317abfd35f1bbf92e7f3f51e3-kubeconfig\") pod \"kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"e663d30317abfd35f1bbf92e7f3f51e3\") " pod="kube-system/kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966634 kubelet[2258]: I0904 17:52:31.966035 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8574320d7ccd85816ae74528c6ffea56-k8s-certs\") pod \"kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"8574320d7ccd85816ae74528c6ffea56\") " pod="kube-system/kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966634 kubelet[2258]: I0904 17:52:31.966105 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8574320d7ccd85816ae74528c6ffea56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"8574320d7ccd85816ae74528c6ffea56\") " pod="kube-system/kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966634 kubelet[2258]: E0904 17:52:31.966135 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.52:6443: connect: connection refused" interval="400ms" Sep 4 17:52:31.966634 kubelet[2258]: I0904 17:52:31.966196 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-ca-certs\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966759 kubelet[2258]: I0904 17:52:31.966243 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-k8s-certs\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:31.966759 kubelet[2258]: I0904 17:52:31.966278 2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-kubeconfig\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:32.077672 kubelet[2258]: I0904 17:52:32.077635 2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:32.078104 kubelet[2258]: E0904 17:52:32.078053 2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.52:6443/api/v1/nodes\": dial tcp 10.128.0.52:6443: connect: connection refused" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:32.235992 containerd[1456]: time="2024-09-04T17:52:32.235743295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,Uid:8574320d7ccd85816ae74528c6ffea56,Namespace:kube-system,Attempt:0,}" Sep 4 17:52:32.253606 containerd[1456]: time="2024-09-04T17:52:32.253537441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,Uid:1799e8b44409f55219d0dc04e79c4af8,Namespace:kube-system,Attempt:0,}" Sep 4 17:52:32.255705 containerd[1456]: time="2024-09-04T17:52:32.255644893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,Uid:e663d30317abfd35f1bbf92e7f3f51e3,Namespace:kube-system,Attempt:0,}" Sep 4 17:52:32.367418 kubelet[2258]: E0904 17:52:32.367333 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.52:6443: connect: connection refused" interval="800ms" Sep 4 17:52:32.484628 kubelet[2258]: I0904 17:52:32.484591 2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:32.485053 kubelet[2258]: E0904 17:52:32.485004 2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.52:6443/api/v1/nodes\": dial tcp 10.128.0.52:6443: connect: connection refused" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:32.620361 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1247970631.mount: Deactivated successfully. Sep 4 17:52:32.629985 containerd[1456]: time="2024-09-04T17:52:32.629917986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:52:32.631289 containerd[1456]: time="2024-09-04T17:52:32.631224539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Sep 4 17:52:32.633030 containerd[1456]: time="2024-09-04T17:52:32.632932053Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:52:32.634535 containerd[1456]: time="2024-09-04T17:52:32.634476343Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:52:32.636170 containerd[1456]: time="2024-09-04T17:52:32.636083813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:52:32.640186 containerd[1456]: time="2024-09-04T17:52:32.639079126Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:52:32.640186 containerd[1456]: time="2024-09-04T17:52:32.640061456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:52:32.645216 containerd[1456]: time="2024-09-04T17:52:32.645172422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:52:32.646387 containerd[1456]: time="2024-09-04T17:52:32.646340681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.396424ms" Sep 4 17:52:32.649687 containerd[1456]: time="2024-09-04T17:52:32.649576181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 395.925085ms" Sep 4 17:52:32.650576 containerd[1456]: time="2024-09-04T17:52:32.650538369Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 394.783878ms" Sep 4 17:52:32.685455 kubelet[2258]: W0904 17:52:32.685407 2258 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.128.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:32.685618 kubelet[2258]: E0904 17:52:32.685469 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:32.730179 kubelet[2258]: W0904 17:52:32.730015 2258 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:32.730179 kubelet[2258]: E0904 17:52:32.730116 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888330736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888421590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888448368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888593293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888417708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888483164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888509697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:32.889206 containerd[1456]: time="2024-09-04T17:52:32.888613258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:32.892341 containerd[1456]: time="2024-09-04T17:52:32.891940727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:52:32.892341 containerd[1456]: time="2024-09-04T17:52:32.892055914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:52:32.892341 containerd[1456]: time="2024-09-04T17:52:32.892076697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:32.894706 containerd[1456]: time="2024-09-04T17:52:32.894577974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:32.943094 systemd[1]: Started cri-containerd-8d57de634a80daeba3921a26c33ca7d3115cc0007b41f8abc6c3c24867f0851a.scope - libcontainer container 8d57de634a80daeba3921a26c33ca7d3115cc0007b41f8abc6c3c24867f0851a. Sep 4 17:52:32.960424 systemd[1]: Started cri-containerd-146af23745c19feb688bbed67f069fb1b34f3e67ce6810283475e5a7fbfe72fe.scope - libcontainer container 146af23745c19feb688bbed67f069fb1b34f3e67ce6810283475e5a7fbfe72fe. Sep 4 17:52:32.968475 systemd[1]: Started cri-containerd-ff2319756c7d29a654a7093b984a04f72d6ade5ab0a407c78440582938124086.scope - libcontainer container ff2319756c7d29a654a7093b984a04f72d6ade5ab0a407c78440582938124086. Sep 4 17:52:32.976878 kubelet[2258]: W0904 17:52:32.973874 2258 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:32.976878 kubelet[2258]: E0904 17:52:32.973991 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:33.005461 kubelet[2258]: W0904 17:52:33.005363 2258 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:33.005638 kubelet[2258]: E0904 17:52:33.005475 2258 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.52:6443: connect: connection refused Sep 4 17:52:33.049147 containerd[1456]: time="2024-09-04T17:52:33.049087463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,Uid:1799e8b44409f55219d0dc04e79c4af8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d57de634a80daeba3921a26c33ca7d3115cc0007b41f8abc6c3c24867f0851a\"" Sep 4 17:52:33.052521 kubelet[2258]: E0904 17:52:33.052439 2258 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flat" Sep 4 17:52:33.055948 containerd[1456]: time="2024-09-04T17:52:33.055898998Z" level=info msg="CreateContainer within sandbox \"8d57de634a80daeba3921a26c33ca7d3115cc0007b41f8abc6c3c24867f0851a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:52:33.083418 containerd[1456]: time="2024-09-04T17:52:33.083362056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,Uid:e663d30317abfd35f1bbf92e7f3f51e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff2319756c7d29a654a7093b984a04f72d6ade5ab0a407c78440582938124086\"" Sep 4 17:52:33.087800 kubelet[2258]: E0904 17:52:33.087693 2258 kubelet_pods.go:513] "Hostname 
for pod was too long, truncated it" podName="kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-21291" Sep 4 17:52:33.091423 containerd[1456]: time="2024-09-04T17:52:33.090924113Z" level=info msg="CreateContainer within sandbox \"ff2319756c7d29a654a7093b984a04f72d6ade5ab0a407c78440582938124086\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:52:33.097999 containerd[1456]: time="2024-09-04T17:52:33.097924169Z" level=info msg="CreateContainer within sandbox \"8d57de634a80daeba3921a26c33ca7d3115cc0007b41f8abc6c3c24867f0851a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"81dff68601ee56702ceb37c21ac1d4e42fe91d520a1720539390521c69a0c11b\"" Sep 4 17:52:33.099453 containerd[1456]: time="2024-09-04T17:52:33.099421126Z" level=info msg="StartContainer for \"81dff68601ee56702ceb37c21ac1d4e42fe91d520a1720539390521c69a0c11b\"" Sep 4 17:52:33.101431 containerd[1456]: time="2024-09-04T17:52:33.101393434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,Uid:8574320d7ccd85816ae74528c6ffea56,Namespace:kube-system,Attempt:0,} returns sandbox id \"146af23745c19feb688bbed67f069fb1b34f3e67ce6810283475e5a7fbfe72fe\"" Sep 4 17:52:33.105422 kubelet[2258]: E0904 17:52:33.105361 2258 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-21291" Sep 4 17:52:33.108505 containerd[1456]: time="2024-09-04T17:52:33.108466920Z" level=info msg="CreateContainer within sandbox \"146af23745c19feb688bbed67f069fb1b34f3e67ce6810283475e5a7fbfe72fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:52:33.124431 containerd[1456]: time="2024-09-04T17:52:33.124251082Z" level=info msg="CreateContainer within sandbox \"ff2319756c7d29a654a7093b984a04f72d6ade5ab0a407c78440582938124086\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e7e60889651e4cf417f2b15887d367a76e87121215eb2f4517aee51dfd7c879e\"" Sep 4 17:52:33.125342 containerd[1456]: time="2024-09-04T17:52:33.125132669Z" level=info msg="StartContainer for \"e7e60889651e4cf417f2b15887d367a76e87121215eb2f4517aee51dfd7c879e\"" Sep 4 17:52:33.142817 containerd[1456]: time="2024-09-04T17:52:33.141419091Z" level=info msg="CreateContainer within sandbox \"146af23745c19feb688bbed67f069fb1b34f3e67ce6810283475e5a7fbfe72fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d2941ff967c16fc0db7e07b9960a36b66e965b9f8fb8f7fa68827ace233fea5\"" Sep 4 17:52:33.149667 containerd[1456]: time="2024-09-04T17:52:33.149515690Z" level=info msg="StartContainer for \"7d2941ff967c16fc0db7e07b9960a36b66e965b9f8fb8f7fa68827ace233fea5\"" Sep 4 17:52:33.151612 systemd[1]: Started cri-containerd-81dff68601ee56702ceb37c21ac1d4e42fe91d520a1720539390521c69a0c11b.scope - libcontainer container 81dff68601ee56702ceb37c21ac1d4e42fe91d520a1720539390521c69a0c11b. 
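The "Hostname for pod was too long, truncated it" warnings apply the 63-character hostname limit (hostnameMaxLen=63); for the names in this log the truncated value is simply the first 63 characters of the pod name. A quick check against the logged values:

```python
# Sketch: the truncated hostnames logged above are the first 63 characters of
# the pod name (hostnameMaxLen=63). Values copied verbatim from this log.
pod = "kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal"
assert pod[:63] == "kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flat"
```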
Sep 4 17:52:33.169441 kubelet[2258]: E0904 17:52:33.169354 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.52:6443: connect: connection refused" interval="1.6s" Sep 4 17:52:33.205454 systemd[1]: Started cri-containerd-e7e60889651e4cf417f2b15887d367a76e87121215eb2f4517aee51dfd7c879e.scope - libcontainer container e7e60889651e4cf417f2b15887d367a76e87121215eb2f4517aee51dfd7c879e. Sep 4 17:52:33.225989 systemd[1]: Started cri-containerd-7d2941ff967c16fc0db7e07b9960a36b66e965b9f8fb8f7fa68827ace233fea5.scope - libcontainer container 7d2941ff967c16fc0db7e07b9960a36b66e965b9f8fb8f7fa68827ace233fea5. Sep 4 17:52:33.272815 containerd[1456]: time="2024-09-04T17:52:33.272735800Z" level=info msg="StartContainer for \"81dff68601ee56702ceb37c21ac1d4e42fe91d520a1720539390521c69a0c11b\" returns successfully" Sep 4 17:52:33.291473 kubelet[2258]: I0904 17:52:33.291429 2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:33.292422 kubelet[2258]: E0904 17:52:33.292379 2258 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.52:6443/api/v1/nodes\": dial tcp 10.128.0.52:6443: connect: connection refused" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:33.350668 containerd[1456]: time="2024-09-04T17:52:33.350587510Z" level=info msg="StartContainer for \"7d2941ff967c16fc0db7e07b9960a36b66e965b9f8fb8f7fa68827ace233fea5\" returns successfully" Sep 4 17:52:33.360183 containerd[1456]: time="2024-09-04T17:52:33.359387350Z" level=info msg="StartContainer for \"e7e60889651e4cf417f2b15887d367a76e87121215eb2f4517aee51dfd7c879e\" returns successfully" Sep 4 17:52:34.899505 kubelet[2258]: I0904 17:52:34.899455 2258 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:36.507168 kubelet[2258]: E0904 17:52:36.507100 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" not found" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:36.543830 kubelet[2258]: I0904 17:52:36.543757 2258 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:36.584055 kubelet[2258]: E0904 17:52:36.583918 2258 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal.17f21bf643aa4033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,UID:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,},FirstTimestamp:2024-09-04 17:52:31.747457075 +0000 UTC m=+0.620423406,LastTimestamp:2024-09-04 17:52:31.747457075 +0000 UTC m=+0.620423406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal,}" Sep 4 17:52:36.745201 kubelet[2258]: I0904 17:52:36.744053 2258 apiserver.go:52] "Watching apiserver" Sep 4 17:52:36.765637 kubelet[2258]: I0904 17:52:36.764625 2258 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:52:38.574078 systemd[1]: Reloading requested from client PID 2534 ('systemctl') (unit session-9.scope)... Sep 4 17:52:38.574105 systemd[1]: Reloading... Sep 4 17:52:38.733195 zram_generator::config[2575]: No configuration found. Sep 4 17:52:38.879864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:52:39.017115 systemd[1]: Reloading finished in 442 ms. Sep 4 17:52:39.077762 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:52:39.089720 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:52:39.090056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:52:39.090168 systemd[1]: kubelet.service: Consumed 1.150s CPU time, 116.0M memory peak, 0B memory swap peak. Sep 4 17:52:39.095639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:52:39.366862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:52:39.382081 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:52:39.468652 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:52:39.468652 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:52:39.468652 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:52:39.469307 kubelet[2620]: I0904 17:52:39.468730 2620 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:52:39.474659 kubelet[2620]: I0904 17:52:39.474606 2620 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:52:39.474659 kubelet[2620]: I0904 17:52:39.474640 2620 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:52:39.474959 kubelet[2620]: I0904 17:52:39.474922 2620 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:52:39.476698 kubelet[2620]: I0904 17:52:39.476661 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:52:39.480784 kubelet[2620]: I0904 17:52:39.480756 2620 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:52:39.491593 kubelet[2620]: I0904 17:52:39.491507 2620 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:52:39.491942 kubelet[2620]: I0904 17:52:39.491884 2620 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:52:39.492227 kubelet[2620]: I0904 17:52:39.491972 2620 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:52:39.492398 kubelet[2620]: I0904 17:52:39.492255 2620 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:52:39.492398 kubelet[2620]: I0904 17:52:39.492276 2620 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:52:39.492398 kubelet[2620]: I0904 17:52:39.492339 2620 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:52:39.492588 kubelet[2620]: I0904 17:52:39.492479 2620 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:52:39.492588 kubelet[2620]: I0904 17:52:39.492498 2620 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:52:39.492588 kubelet[2620]: I0904 17:52:39.492532 2620 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:52:39.492588 kubelet[2620]: I0904 17:52:39.492554 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:52:39.499191 kubelet[2620]: I0904 17:52:39.496679 2620 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:52:39.499191 kubelet[2620]: I0904 17:52:39.496943 2620 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:52:39.499191 kubelet[2620]: I0904 17:52:39.497553 2620 server.go:1264] "Started kubelet" Sep 4 17:52:39.500579 kubelet[2620]: I0904 17:52:39.500532 2620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:52:39.502058 kubelet[2620]: I0904 17:52:39.502034 2620 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:52:39.502662 kubelet[2620]: I0904 17:52:39.502600 2620 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:52:39.503561 kubelet[2620]: I0904 17:52:39.503528 2620 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:52:39.505384 kubelet[2620]: I0904 17:52:39.505364 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:52:39.520116 kubelet[2620]: I0904 17:52:39.520066 2620 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:52:39.522695 kubelet[2620]: I0904 17:52:39.522661 2620 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:52:39.522925 kubelet[2620]: I0904 17:52:39.522907 2620 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:52:39.528679 kubelet[2620]: I0904 17:52:39.528640 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:52:39.532466 kubelet[2620]: I0904 17:52:39.532424 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:52:39.534506 kubelet[2620]: I0904 17:52:39.534475 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:52:39.535957 kubelet[2620]: I0904 17:52:39.535936 2620 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:52:39.536118 kubelet[2620]: I0904 17:52:39.536105 2620 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:52:39.536365 kubelet[2620]: E0904 17:52:39.536341 2620 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:52:39.554197 kubelet[2620]: E0904 17:52:39.552354 2620 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:52:39.558406 kubelet[2620]: I0904 17:52:39.558373 2620 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:52:39.558590 kubelet[2620]: I0904 17:52:39.558577 2620 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:52:39.632674 kubelet[2620]: I0904 17:52:39.631786 2620 kubelet_node_status.go:73] "Attempting to register node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:39.638522 kubelet[2620]: E0904 17:52:39.638408 2620 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:52:39.644505 kubelet[2620]: I0904 17:52:39.644433 2620 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:52:39.644505 kubelet[2620]: I0904 17:52:39.644458 2620 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:52:39.644505 kubelet[2620]: I0904 17:52:39.644490 2620 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:52:39.644799 kubelet[2620]: I0904 17:52:39.644772 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:52:39.644871 kubelet[2620]: I0904 17:52:39.644789 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:52:39.644871 kubelet[2620]: I0904 17:52:39.644817 2620 policy_none.go:49] "None policy: Start" Sep 4 17:52:39.648193 kubelet[2620]: I0904 17:52:39.648166 2620 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:52:39.648377 kubelet[2620]: I0904 17:52:39.648207 2620 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:52:39.648471 kubelet[2620]: I0904 17:52:39.648447 2620 state_mem.go:75] "Updated machine memory state" Sep 4 17:52:39.650428 kubelet[2620]: I0904 17:52:39.650350 2620 kubelet_node_status.go:112] "Node was previously registered" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:39.650585 kubelet[2620]: I0904 17:52:39.650499 2620 kubelet_node_status.go:76] "Successfully registered node" node="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:39.668300 update_engine[1444]: I0904 17:52:39.667924 1444 update_attempter.cc:509] Updating boot flags... 
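
[Editor's note] The container-manager config dumped a few entries earlier mixes absolute and percentage HardEvictionThresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%). A small worked sketch of how a percentage signal translates into bytes for a given filesystem size; the 40 GiB capacity below is an assumed example value, not taken from this node:

    package main

    import "fmt"

    // thresholdBytes converts a percentage-based eviction signal (for example
    // "nodefs.available < 10%") into an absolute byte count for a filesystem of
    // the given capacity. Purely illustrative arithmetic.
    func thresholdBytes(capacityBytes uint64, fraction float64) uint64 {
            return uint64(float64(capacityBytes) * fraction)
    }

    func main() {
            capacity := uint64(40) << 30 // assumed 40 GiB filesystem, not this node's real size
            fmt.Println("nodefs.available  (10%):", thresholdBytes(capacity, 0.10), "bytes")
            fmt.Println("imagefs.available (15%):", thresholdBytes(capacity, 0.15), "bytes")
    }
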
Sep 4 17:52:39.671764 kubelet[2620]: I0904 17:52:39.669909 2620 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:52:39.671764 kubelet[2620]: I0904 17:52:39.671285 2620 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:52:39.676408 kubelet[2620]: I0904 17:52:39.675783 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:52:39.788318 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2667) Sep 4 17:52:39.843480 kubelet[2620]: I0904 17:52:39.843410 2620 topology_manager.go:215] "Topology Admit Handler" podUID="8574320d7ccd85816ae74528c6ffea56" podNamespace="kube-system" podName="kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:39.847961 kubelet[2620]: I0904 17:52:39.843770 2620 topology_manager.go:215] "Topology Admit Handler" podUID="1799e8b44409f55219d0dc04e79c4af8" podNamespace="kube-system" podName="kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:39.847961 kubelet[2620]: I0904 17:52:39.843874 2620 topology_manager.go:215] "Topology Admit Handler" podUID="e663d30317abfd35f1bbf92e7f3f51e3" podNamespace="kube-system" podName="kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:39.914982 kubelet[2620]: W0904 17:52:39.913571 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Sep 4 17:52:39.914982 kubelet[2620]: W0904 17:52:39.913852 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Sep 4 17:52:39.914982 kubelet[2620]: W0904 17:52:39.913961 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Sep 4 17:52:39.943232 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2668) Sep 4 17:52:40.028654 kubelet[2620]: I0904 17:52:40.028574 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8574320d7ccd85816ae74528c6ffea56-ca-certs\") pod \"kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"8574320d7ccd85816ae74528c6ffea56\") " pod="kube-system/kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.028915 kubelet[2620]: I0904 17:52:40.028884 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8574320d7ccd85816ae74528c6ffea56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"8574320d7ccd85816ae74528c6ffea56\") " pod="kube-system/kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.030616 kubelet[2620]: I0904 17:52:40.029036 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-flexvolume-dir\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.030616 kubelet[2620]: I0904 17:52:40.029076 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.030616 kubelet[2620]: I0904 17:52:40.029115 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e663d30317abfd35f1bbf92e7f3f51e3-kubeconfig\") pod \"kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"e663d30317abfd35f1bbf92e7f3f51e3\") " pod="kube-system/kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.030616 kubelet[2620]: I0904 17:52:40.029147 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8574320d7ccd85816ae74528c6ffea56-k8s-certs\") pod \"kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"8574320d7ccd85816ae74528c6ffea56\") " pod="kube-system/kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.030937 kubelet[2620]: I0904 17:52:40.029198 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-ca-certs\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.030937 kubelet[2620]: I0904 17:52:40.029228 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-k8s-certs\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.030937 kubelet[2620]: I0904 17:52:40.029259 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1799e8b44409f55219d0dc04e79c4af8-kubeconfig\") pod \"kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal\" (UID: \"1799e8b44409f55219d0dc04e79c4af8\") " pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:52:40.493889 kubelet[2620]: I0904 17:52:40.493801 2620 apiserver.go:52] "Watching apiserver" Sep 4 17:52:40.523578 kubelet[2620]: I0904 17:52:40.523510 2620 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 
17:52:40.614501 kubelet[2620]: I0904 17:52:40.614305 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" podStartSLOduration=1.6142831979999999 podStartE2EDuration="1.614283198s" podCreationTimestamp="2024-09-04 17:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:40.613493323 +0000 UTC m=+1.223012729" watchObservedRunningTime="2024-09-04 17:52:40.614283198 +0000 UTC m=+1.223802597" Sep 4 17:52:40.629208 kubelet[2620]: I0904 17:52:40.628635 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" podStartSLOduration=1.628610793 podStartE2EDuration="1.628610793s" podCreationTimestamp="2024-09-04 17:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:40.624107588 +0000 UTC m=+1.233626993" watchObservedRunningTime="2024-09-04 17:52:40.628610793 +0000 UTC m=+1.238130199" Sep 4 17:52:40.657180 kubelet[2620]: I0904 17:52:40.655291 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" podStartSLOduration=1.6552741100000001 podStartE2EDuration="1.65527411s" podCreationTimestamp="2024-09-04 17:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:40.645829035 +0000 UTC m=+1.255348439" watchObservedRunningTime="2024-09-04 17:52:40.65527411 +0000 UTC m=+1.264793516" Sep 4 17:52:45.444334 sudo[1738]: pam_unix(sudo:session): session closed for user root Sep 4 17:52:45.488032 sshd[1735]: pam_unix(sshd:session): session closed for user core Sep 4 17:52:45.492924 systemd[1]: sshd@9-10.128.0.52:22-147.75.109.163:44544.service: Deactivated successfully. Sep 4 17:52:45.495990 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:52:45.496514 systemd[1]: session-9.scope: Consumed 7.335s CPU time, 140.1M memory peak, 0B memory swap peak. Sep 4 17:52:45.498515 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:52:45.500049 systemd-logind[1441]: Removed session 9. Sep 4 17:52:49.519692 systemd[1]: Started sshd@10-10.128.0.52:22-128.199.100.189:41668.service - OpenSSH per-connection server daemon (128.199.100.189:41668). Sep 4 17:52:50.636732 sshd[2717]: Invalid user gustavo from 128.199.100.189 port 41668 Sep 4 17:52:50.847776 sshd[2717]: Received disconnect from 128.199.100.189 port 41668:11: Bye Bye [preauth] Sep 4 17:52:50.847776 sshd[2717]: Disconnected from invalid user gustavo 128.199.100.189 port 41668 [preauth] Sep 4 17:52:50.851112 systemd[1]: sshd@10-10.128.0.52:22-128.199.100.189:41668.service: Deactivated successfully. Sep 4 17:52:53.846184 kubelet[2620]: I0904 17:52:53.845511 2620 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:52:53.850181 containerd[1456]: time="2024-09-04T17:52:53.847509429Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
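
[Editor's note] The podStartSLOduration values in the pod_startup_latency_tracker entries above are simply observedRunningTime minus podCreationTimestamp; for the kube-scheduler pod that is 17:52:40.614283198 minus 17:52:39, i.e. 1.614283198s. A quick Go check of that arithmetic, using only the two timestamps from the log:

    package main

    import (
            "fmt"
            "time"
    )

    func main() {
            const layout = "2006-01-02 15:04:05 -0700 MST"
            created, err := time.Parse(layout, "2024-09-04 17:52:39 +0000 UTC")
            if err != nil {
                    panic(err)
            }
            observed, err := time.Parse(layout, "2024-09-04 17:52:40.614283198 +0000 UTC")
            if err != nil {
                    panic(err)
            }
            fmt.Println(observed.Sub(created)) // 1.614283198s
    }
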
Sep 4 17:52:53.850736 kubelet[2620]: I0904 17:52:53.848036 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:52:54.543277 kubelet[2620]: I0904 17:52:54.543219 2620 topology_manager.go:215] "Topology Admit Handler" podUID="d443bda7-d2a8-4d8c-96c3-744bb8855b12" podNamespace="kube-system" podName="kube-proxy-lfhwl" Sep 4 17:52:54.576843 systemd[1]: Created slice kubepods-besteffort-podd443bda7_d2a8_4d8c_96c3_744bb8855b12.slice - libcontainer container kubepods-besteffort-podd443bda7_d2a8_4d8c_96c3_744bb8855b12.slice. Sep 4 17:52:54.625713 kubelet[2620]: I0904 17:52:54.625407 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d443bda7-d2a8-4d8c-96c3-744bb8855b12-kube-proxy\") pod \"kube-proxy-lfhwl\" (UID: \"d443bda7-d2a8-4d8c-96c3-744bb8855b12\") " pod="kube-system/kube-proxy-lfhwl" Sep 4 17:52:54.625713 kubelet[2620]: I0904 17:52:54.625482 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d443bda7-d2a8-4d8c-96c3-744bb8855b12-lib-modules\") pod \"kube-proxy-lfhwl\" (UID: \"d443bda7-d2a8-4d8c-96c3-744bb8855b12\") " pod="kube-system/kube-proxy-lfhwl" Sep 4 17:52:54.625713 kubelet[2620]: I0904 17:52:54.625518 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5k89\" (UniqueName: \"kubernetes.io/projected/d443bda7-d2a8-4d8c-96c3-744bb8855b12-kube-api-access-g5k89\") pod \"kube-proxy-lfhwl\" (UID: \"d443bda7-d2a8-4d8c-96c3-744bb8855b12\") " pod="kube-system/kube-proxy-lfhwl" Sep 4 17:52:54.625713 kubelet[2620]: I0904 17:52:54.625557 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d443bda7-d2a8-4d8c-96c3-744bb8855b12-xtables-lock\") pod \"kube-proxy-lfhwl\" (UID: \"d443bda7-d2a8-4d8c-96c3-744bb8855b12\") " pod="kube-system/kube-proxy-lfhwl" Sep 4 17:52:54.735512 kubelet[2620]: E0904 17:52:54.735461 2620 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 17:52:54.735512 kubelet[2620]: E0904 17:52:54.735512 2620 projected.go:200] Error preparing data for projected volume kube-api-access-g5k89 for pod kube-system/kube-proxy-lfhwl: configmap "kube-root-ca.crt" not found Sep 4 17:52:54.735793 kubelet[2620]: E0904 17:52:54.735595 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d443bda7-d2a8-4d8c-96c3-744bb8855b12-kube-api-access-g5k89 podName:d443bda7-d2a8-4d8c-96c3-744bb8855b12 nodeName:}" failed. No retries permitted until 2024-09-04 17:52:55.235567356 +0000 UTC m=+15.845086743 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g5k89" (UniqueName: "kubernetes.io/projected/d443bda7-d2a8-4d8c-96c3-744bb8855b12-kube-api-access-g5k89") pod "kube-proxy-lfhwl" (UID: "d443bda7-d2a8-4d8c-96c3-744bb8855b12") : configmap "kube-root-ca.crt" not found Sep 4 17:52:54.929067 kubelet[2620]: I0904 17:52:54.929011 2620 topology_manager.go:215] "Topology Admit Handler" podUID="d12921c8-815e-4fd4-b483-19439c3b9c1d" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-wkk5s" Sep 4 17:52:54.942967 systemd[1]: Created slice kubepods-besteffort-podd12921c8_815e_4fd4_b483_19439c3b9c1d.slice - libcontainer container kubepods-besteffort-podd12921c8_815e_4fd4_b483_19439c3b9c1d.slice. Sep 4 17:52:55.028901 kubelet[2620]: I0904 17:52:55.028709 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d12921c8-815e-4fd4-b483-19439c3b9c1d-var-lib-calico\") pod \"tigera-operator-77f994b5bb-wkk5s\" (UID: \"d12921c8-815e-4fd4-b483-19439c3b9c1d\") " pod="tigera-operator/tigera-operator-77f994b5bb-wkk5s" Sep 4 17:52:55.028901 kubelet[2620]: I0904 17:52:55.028769 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ss6\" (UniqueName: \"kubernetes.io/projected/d12921c8-815e-4fd4-b483-19439c3b9c1d-kube-api-access-t7ss6\") pod \"tigera-operator-77f994b5bb-wkk5s\" (UID: \"d12921c8-815e-4fd4-b483-19439c3b9c1d\") " pod="tigera-operator/tigera-operator-77f994b5bb-wkk5s" Sep 4 17:52:55.249537 containerd[1456]: time="2024-09-04T17:52:55.249378730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-wkk5s,Uid:d12921c8-815e-4fd4-b483-19439c3b9c1d,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:52:55.289522 containerd[1456]: time="2024-09-04T17:52:55.288842621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:52:55.289522 containerd[1456]: time="2024-09-04T17:52:55.288943237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:52:55.289522 containerd[1456]: time="2024-09-04T17:52:55.288971846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:55.289522 containerd[1456]: time="2024-09-04T17:52:55.289097506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:55.327445 systemd[1]: Started cri-containerd-7f5a6a7a1d2c1939ddf5e40e8754811650704a86810aec4b5f6248f1414ffa4a.scope - libcontainer container 7f5a6a7a1d2c1939ddf5e40e8754811650704a86810aec4b5f6248f1414ffa4a. 
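
[Editor's note] The MountVolume.SetUp failure above is retried after a delay ("No retries permitted until ... durationBeforeRetry 500ms") because the kube-root-ca.crt configmap has not been created yet. A hedged sketch of retrying with an exponentially growing delay starting at 500ms; the doubling factor, cap, and attempt limit are assumptions for illustration, not the kubelet's exact tuning:

    package main

    import (
            "errors"
            "fmt"
            "time"
    )

    // retryWithBackoff retries fn, sleeping between attempts with a delay that
    // starts at 500ms and doubles up to a cap. Assumed parameters, for
    // illustration only.
    func retryWithBackoff(attempts int, fn func() error) error {
            delay := 500 * time.Millisecond
            const maxDelay = 2 * time.Minute
            var err error
            for i := 0; i < attempts; i++ {
                    if err = fn(); err == nil {
                            return nil
                    }
                    time.Sleep(delay)
                    delay *= 2
                    if delay > maxDelay {
                            delay = maxDelay
                    }
            }
            return err
    }

    func main() {
            calls := 0
            err := retryWithBackoff(5, func() error {
                    calls++
                    if calls < 3 {
                            return errors.New(`configmap "kube-root-ca.crt" not found`)
                    }
                    return nil // the configmap eventually appears and the mount succeeds
            })
            fmt.Println("attempts:", calls, "err:", err)
    }
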
Sep 4 17:52:55.397737 containerd[1456]: time="2024-09-04T17:52:55.397659250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-wkk5s,Uid:d12921c8-815e-4fd4-b483-19439c3b9c1d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7f5a6a7a1d2c1939ddf5e40e8754811650704a86810aec4b5f6248f1414ffa4a\"" Sep 4 17:52:55.400598 containerd[1456]: time="2024-09-04T17:52:55.400555818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:52:55.489933 containerd[1456]: time="2024-09-04T17:52:55.489869535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfhwl,Uid:d443bda7-d2a8-4d8c-96c3-744bb8855b12,Namespace:kube-system,Attempt:0,}" Sep 4 17:52:55.527029 containerd[1456]: time="2024-09-04T17:52:55.526492348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:52:55.527029 containerd[1456]: time="2024-09-04T17:52:55.526566043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:52:55.527029 containerd[1456]: time="2024-09-04T17:52:55.526593771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:55.527029 containerd[1456]: time="2024-09-04T17:52:55.526726393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:52:55.555438 systemd[1]: Started cri-containerd-e0937a5f51cb31c69550a794d9e8ac40164ff66eb32b38ff0232f9da6a231d33.scope - libcontainer container e0937a5f51cb31c69550a794d9e8ac40164ff66eb32b38ff0232f9da6a231d33. Sep 4 17:52:55.589421 containerd[1456]: time="2024-09-04T17:52:55.589217359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfhwl,Uid:d443bda7-d2a8-4d8c-96c3-744bb8855b12,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0937a5f51cb31c69550a794d9e8ac40164ff66eb32b38ff0232f9da6a231d33\"" Sep 4 17:52:55.594422 containerd[1456]: time="2024-09-04T17:52:55.594135476Z" level=info msg="CreateContainer within sandbox \"e0937a5f51cb31c69550a794d9e8ac40164ff66eb32b38ff0232f9da6a231d33\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:52:55.622277 containerd[1456]: time="2024-09-04T17:52:55.622052402Z" level=info msg="CreateContainer within sandbox \"e0937a5f51cb31c69550a794d9e8ac40164ff66eb32b38ff0232f9da6a231d33\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ada4e64b004c07f1cfddf4d424f74b9d40df0a58e75b38a6fa9874105dfb96f\"" Sep 4 17:52:55.623515 containerd[1456]: time="2024-09-04T17:52:55.623236814Z" level=info msg="StartContainer for \"2ada4e64b004c07f1cfddf4d424f74b9d40df0a58e75b38a6fa9874105dfb96f\"" Sep 4 17:52:55.662892 systemd[1]: Started cri-containerd-2ada4e64b004c07f1cfddf4d424f74b9d40df0a58e75b38a6fa9874105dfb96f.scope - libcontainer container 2ada4e64b004c07f1cfddf4d424f74b9d40df0a58e75b38a6fa9874105dfb96f. Sep 4 17:52:55.708775 containerd[1456]: time="2024-09-04T17:52:55.708470364Z" level=info msg="StartContainer for \"2ada4e64b004c07f1cfddf4d424f74b9d40df0a58e75b38a6fa9874105dfb96f\" returns successfully" Sep 4 17:52:57.009956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831733167.mount: Deactivated successfully. 
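
[Editor's note] The kube-proxy lines above show the same three-step sequence as the control-plane pods earlier: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs the returned container id. A hypothetical Go interface sketching that call order; the interface, signatures, and fake ids are assumptions for illustration, not the real CRI client API:

    package main

    import "fmt"

    // runtimeService is a hypothetical stand-in for a CRI runtime client; the
    // method names mirror the log messages above, but the signatures are assumed.
    type runtimeService interface {
            RunPodSandbox(podName string) (string, error)
            CreateContainer(sandboxID, name string) (string, error)
            StartContainer(containerID string) error
    }

    // startPod walks the three-step sequence seen in the kube-proxy entries:
    // sandbox first, then the container inside it, then the start call.
    func startPod(rt runtimeService, podName, containerName string) (string, error) {
            sandboxID, err := rt.RunPodSandbox(podName)
            if err != nil {
                    return "", fmt.Errorf("RunPodSandbox: %w", err)
            }
            containerID, err := rt.CreateContainer(sandboxID, containerName)
            if err != nil {
                    return "", fmt.Errorf("CreateContainer: %w", err)
            }
            return containerID, rt.StartContainer(containerID)
    }

    // fakeRuntime satisfies runtimeService with canned ids so the sketch runs.
    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(string) (string, error)           { return "sandbox-1234", nil }
    func (fakeRuntime) CreateContainer(string, string) (string, error) { return "container-5678", nil }
    func (fakeRuntime) StartContainer(string) error                    { return nil }

    func main() {
            id, err := startPod(fakeRuntime{}, "kube-proxy-lfhwl", "kube-proxy")
            fmt.Println(id, err)
    }
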
Sep 4 17:52:57.817182 containerd[1456]: time="2024-09-04T17:52:57.817084745Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:57.818505 containerd[1456]: time="2024-09-04T17:52:57.818432567Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136517" Sep 4 17:52:57.820319 containerd[1456]: time="2024-09-04T17:52:57.820271570Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:57.824204 containerd[1456]: time="2024-09-04T17:52:57.823552565Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:52:57.825260 containerd[1456]: time="2024-09-04T17:52:57.824552796Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.423941631s" Sep 4 17:52:57.825260 containerd[1456]: time="2024-09-04T17:52:57.824603339Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:52:57.828567 containerd[1456]: time="2024-09-04T17:52:57.828190503Z" level=info msg="CreateContainer within sandbox \"7f5a6a7a1d2c1939ddf5e40e8754811650704a86810aec4b5f6248f1414ffa4a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:52:57.851091 containerd[1456]: time="2024-09-04T17:52:57.851026719Z" level=info msg="CreateContainer within sandbox \"7f5a6a7a1d2c1939ddf5e40e8754811650704a86810aec4b5f6248f1414ffa4a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e793b7822377ae6984285a4c6c0caaa1201f7e49149008cdf8ca7038d9a31e05\"" Sep 4 17:52:57.852040 containerd[1456]: time="2024-09-04T17:52:57.851996644Z" level=info msg="StartContainer for \"e793b7822377ae6984285a4c6c0caaa1201f7e49149008cdf8ca7038d9a31e05\"" Sep 4 17:52:57.898375 systemd[1]: Started cri-containerd-e793b7822377ae6984285a4c6c0caaa1201f7e49149008cdf8ca7038d9a31e05.scope - libcontainer container e793b7822377ae6984285a4c6c0caaa1201f7e49149008cdf8ca7038d9a31e05. 
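
[Editor's note] From the pull entry above, the tigera/operator image (22,130,728 bytes) arrived in 2.423941631s, i.e. roughly 9.1 MB/s. The arithmetic, for reference:

    package main

    import "fmt"

    func main() {
            const bytes = 22130728.0    // image size reported in the log
            const seconds = 2.423941631 // pull duration reported in the log
            fmt.Printf("~%.1f MB/s\n", bytes/seconds/1e6) // ~9.1 MB/s
    }
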
Sep 4 17:52:57.938958 containerd[1456]: time="2024-09-04T17:52:57.938712277Z" level=info msg="StartContainer for \"e793b7822377ae6984285a4c6c0caaa1201f7e49149008cdf8ca7038d9a31e05\" returns successfully" Sep 4 17:52:58.647572 kubelet[2620]: I0904 17:52:58.647259 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lfhwl" podStartSLOduration=4.6472335860000005 podStartE2EDuration="4.647233586s" podCreationTimestamp="2024-09-04 17:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:52:56.642891052 +0000 UTC m=+17.252410459" watchObservedRunningTime="2024-09-04 17:52:58.647233586 +0000 UTC m=+19.256752993" Sep 4 17:52:58.647572 kubelet[2620]: I0904 17:52:58.647401 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-wkk5s" podStartSLOduration=2.221163982 podStartE2EDuration="4.647391124s" podCreationTimestamp="2024-09-04 17:52:54 +0000 UTC" firstStartedPulling="2024-09-04 17:52:55.399670084 +0000 UTC m=+16.009189469" lastFinishedPulling="2024-09-04 17:52:57.825897214 +0000 UTC m=+18.435416611" observedRunningTime="2024-09-04 17:52:58.646934223 +0000 UTC m=+19.256453631" watchObservedRunningTime="2024-09-04 17:52:58.647391124 +0000 UTC m=+19.256910530" Sep 4 17:53:01.007704 kubelet[2620]: I0904 17:53:01.007601 2620 topology_manager.go:215] "Topology Admit Handler" podUID="f46cca9a-3668-487d-a7d0-25094f5f3795" podNamespace="calico-system" podName="calico-typha-6d8c7f4569-fgd99" Sep 4 17:53:01.023501 systemd[1]: Created slice kubepods-besteffort-podf46cca9a_3668_487d_a7d0_25094f5f3795.slice - libcontainer container kubepods-besteffort-podf46cca9a_3668_487d_a7d0_25094f5f3795.slice. Sep 4 17:53:01.068539 kubelet[2620]: I0904 17:53:01.068396 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f46cca9a-3668-487d-a7d0-25094f5f3795-typha-certs\") pod \"calico-typha-6d8c7f4569-fgd99\" (UID: \"f46cca9a-3668-487d-a7d0-25094f5f3795\") " pod="calico-system/calico-typha-6d8c7f4569-fgd99" Sep 4 17:53:01.069558 kubelet[2620]: I0904 17:53:01.068928 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l89z\" (UniqueName: \"kubernetes.io/projected/f46cca9a-3668-487d-a7d0-25094f5f3795-kube-api-access-8l89z\") pod \"calico-typha-6d8c7f4569-fgd99\" (UID: \"f46cca9a-3668-487d-a7d0-25094f5f3795\") " pod="calico-system/calico-typha-6d8c7f4569-fgd99" Sep 4 17:53:01.069558 kubelet[2620]: I0904 17:53:01.069502 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f46cca9a-3668-487d-a7d0-25094f5f3795-tigera-ca-bundle\") pod \"calico-typha-6d8c7f4569-fgd99\" (UID: \"f46cca9a-3668-487d-a7d0-25094f5f3795\") " pod="calico-system/calico-typha-6d8c7f4569-fgd99" Sep 4 17:53:01.136485 kubelet[2620]: I0904 17:53:01.136430 2620 topology_manager.go:215] "Topology Admit Handler" podUID="df466ef6-c2c6-49ac-a3a0-8b16d416984d" podNamespace="calico-system" podName="calico-node-4tp87" Sep 4 17:53:01.151794 systemd[1]: Created slice kubepods-besteffort-poddf466ef6_c2c6_49ac_a3a0_8b16d416984d.slice - libcontainer container kubepods-besteffort-poddf466ef6_c2c6_49ac_a3a0_8b16d416984d.slice. 
Sep 4 17:53:01.170582 kubelet[2620]: I0904 17:53:01.170539 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-var-run-calico\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.173068 kubelet[2620]: I0904 17:53:01.172242 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-cni-net-dir\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.173068 kubelet[2620]: I0904 17:53:01.172345 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-lib-modules\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.173068 kubelet[2620]: I0904 17:53:01.172378 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-xtables-lock\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.173068 kubelet[2620]: I0904 17:53:01.172417 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-policysync\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.173429 kubelet[2620]: I0904 17:53:01.173405 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-cni-bin-dir\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.173565 kubelet[2620]: I0904 17:53:01.173545 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-cni-log-dir\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.173674 kubelet[2620]: I0904 17:53:01.173653 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-flexvol-driver-host\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.174249 kubelet[2620]: I0904 17:53:01.174223 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbs7\" (UniqueName: \"kubernetes.io/projected/df466ef6-c2c6-49ac-a3a0-8b16d416984d-kube-api-access-dxbs7\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.174433 kubelet[2620]: I0904 17:53:01.174398 2620 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df466ef6-c2c6-49ac-a3a0-8b16d416984d-node-certs\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.174513 kubelet[2620]: I0904 17:53:01.174460 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df466ef6-c2c6-49ac-a3a0-8b16d416984d-var-lib-calico\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.174581 kubelet[2620]: I0904 17:53:01.174533 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df466ef6-c2c6-49ac-a3a0-8b16d416984d-tigera-ca-bundle\") pod \"calico-node-4tp87\" (UID: \"df466ef6-c2c6-49ac-a3a0-8b16d416984d\") " pod="calico-system/calico-node-4tp87" Sep 4 17:53:01.272408 kubelet[2620]: I0904 17:53:01.272234 2620 topology_manager.go:215] "Topology Admit Handler" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" podNamespace="calico-system" podName="csi-node-driver-6v7cb" Sep 4 17:53:01.272687 kubelet[2620]: E0904 17:53:01.272632 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6v7cb" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:01.276665 kubelet[2620]: E0904 17:53:01.276474 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.276665 kubelet[2620]: W0904 17:53:01.276520 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.276665 kubelet[2620]: E0904 17:53:01.276551 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.277183 kubelet[2620]: E0904 17:53:01.277017 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.277183 kubelet[2620]: W0904 17:53:01.277033 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.277183 kubelet[2620]: E0904 17:53:01.277082 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.278884 kubelet[2620]: E0904 17:53:01.278373 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.278884 kubelet[2620]: W0904 17:53:01.278393 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.278884 kubelet[2620]: E0904 17:53:01.278501 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.279126 kubelet[2620]: E0904 17:53:01.278954 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.279126 kubelet[2620]: W0904 17:53:01.278967 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.279126 kubelet[2620]: E0904 17:53:01.279023 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.279499 kubelet[2620]: E0904 17:53:01.279430 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.279499 kubelet[2620]: W0904 17:53:01.279444 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.279611 kubelet[2620]: E0904 17:53:01.279540 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.280179 kubelet[2620]: E0904 17:53:01.279867 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.280179 kubelet[2620]: W0904 17:53:01.279900 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.280179 kubelet[2620]: E0904 17:53:01.279980 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.280383 kubelet[2620]: E0904 17:53:01.280325 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.280383 kubelet[2620]: W0904 17:53:01.280339 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.280493 kubelet[2620]: E0904 17:53:01.280400 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.280842 kubelet[2620]: E0904 17:53:01.280755 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.280842 kubelet[2620]: W0904 17:53:01.280768 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.282185 kubelet[2620]: E0904 17:53:01.281244 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.282185 kubelet[2620]: E0904 17:53:01.281733 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.282185 kubelet[2620]: W0904 17:53:01.281756 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.282185 kubelet[2620]: E0904 17:53:01.281846 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.283658 kubelet[2620]: E0904 17:53:01.283632 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.283756 kubelet[2620]: W0904 17:53:01.283672 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.283829 kubelet[2620]: E0904 17:53:01.283767 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.284130 kubelet[2620]: E0904 17:53:01.284105 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.284130 kubelet[2620]: W0904 17:53:01.284127 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.284311 kubelet[2620]: E0904 17:53:01.284235 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.284562 kubelet[2620]: E0904 17:53:01.284539 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.284562 kubelet[2620]: W0904 17:53:01.284559 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.284696 kubelet[2620]: E0904 17:53:01.284648 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.288288 kubelet[2620]: E0904 17:53:01.288247 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.288288 kubelet[2620]: W0904 17:53:01.288272 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.289776 kubelet[2620]: E0904 17:53:01.289497 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.289776 kubelet[2620]: E0904 17:53:01.289679 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.289776 kubelet[2620]: W0904 17:53:01.289691 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.289776 kubelet[2620]: E0904 17:53:01.289741 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.290823 kubelet[2620]: E0904 17:53:01.290680 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.290823 kubelet[2620]: W0904 17:53:01.290697 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.290994 kubelet[2620]: E0904 17:53:01.290832 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.292502 kubelet[2620]: E0904 17:53:01.292483 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.293419 kubelet[2620]: W0904 17:53:01.293232 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.293759 kubelet[2620]: E0904 17:53:01.293619 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.293759 kubelet[2620]: W0904 17:53:01.293647 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.293938 kubelet[2620]: E0904 17:53:01.293919 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.294088 kubelet[2620]: E0904 17:53:01.294017 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.294283 kubelet[2620]: E0904 17:53:01.294243 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.294283 kubelet[2620]: W0904 17:53:01.294257 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.294987 kubelet[2620]: E0904 17:53:01.294855 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.295254 kubelet[2620]: E0904 17:53:01.295219 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.295254 kubelet[2620]: W0904 17:53:01.295234 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.295791 kubelet[2620]: E0904 17:53:01.295772 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.297227 kubelet[2620]: E0904 17:53:01.295933 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.297227 kubelet[2620]: W0904 17:53:01.295947 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.297559 kubelet[2620]: E0904 17:53:01.297484 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.298074 kubelet[2620]: E0904 17:53:01.297932 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.298074 kubelet[2620]: W0904 17:53:01.297947 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.298387 kubelet[2620]: E0904 17:53:01.298318 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.298652 kubelet[2620]: E0904 17:53:01.298504 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.298652 kubelet[2620]: W0904 17:53:01.298517 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.298858 kubelet[2620]: E0904 17:53:01.298809 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.299137 kubelet[2620]: E0904 17:53:01.299037 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.299137 kubelet[2620]: W0904 17:53:01.299051 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.299474 kubelet[2620]: E0904 17:53:01.299313 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.299688 kubelet[2620]: E0904 17:53:01.299673 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.299892 kubelet[2620]: W0904 17:53:01.299770 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.300078 kubelet[2620]: E0904 17:53:01.300060 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.301369 kubelet[2620]: E0904 17:53:01.301240 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.301369 kubelet[2620]: W0904 17:53:01.301320 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.301734 kubelet[2620]: E0904 17:53:01.301668 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.302107 kubelet[2620]: E0904 17:53:01.301968 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.302107 kubelet[2620]: W0904 17:53:01.301986 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.302364 kubelet[2620]: E0904 17:53:01.302321 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.302784 kubelet[2620]: E0904 17:53:01.302679 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.302784 kubelet[2620]: W0904 17:53:01.302695 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.302995 kubelet[2620]: E0904 17:53:01.302939 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.304396 kubelet[2620]: E0904 17:53:01.304272 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.304396 kubelet[2620]: W0904 17:53:01.304292 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.304666 kubelet[2620]: E0904 17:53:01.304555 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.305421 kubelet[2620]: E0904 17:53:01.305402 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.305696 kubelet[2620]: W0904 17:53:01.305529 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.305833 kubelet[2620]: E0904 17:53:01.305811 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.306126 kubelet[2620]: E0904 17:53:01.306014 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.306126 kubelet[2620]: W0904 17:53:01.306030 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.306415 kubelet[2620]: E0904 17:53:01.306313 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.307292 kubelet[2620]: E0904 17:53:01.307273 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.307968 kubelet[2620]: W0904 17:53:01.307400 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.309890 kubelet[2620]: E0904 17:53:01.309853 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.309890 kubelet[2620]: W0904 17:53:01.309879 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.310273 kubelet[2620]: E0904 17:53:01.310244 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.310273 kubelet[2620]: W0904 17:53:01.310263 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.310577 kubelet[2620]: E0904 17:53:01.310559 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.310653 kubelet[2620]: W0904 17:53:01.310580 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.310653 kubelet[2620]: E0904 17:53:01.310599 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.310653 kubelet[2620]: E0904 17:53:01.310633 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.310653 kubelet[2620]: E0904 17:53:01.310650 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.321567 kubelet[2620]: E0904 17:53:01.321423 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.321567 kubelet[2620]: W0904 17:53:01.321455 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.321567 kubelet[2620]: E0904 17:53:01.321481 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.321567 kubelet[2620]: E0904 17:53:01.321521 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.356437 containerd[1456]: time="2024-09-04T17:53:01.355612038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d8c7f4569-fgd99,Uid:f46cca9a-3668-487d-a7d0-25094f5f3795,Namespace:calico-system,Attempt:0,}" Sep 4 17:53:01.359972 kubelet[2620]: E0904 17:53:01.359671 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.361396 kubelet[2620]: W0904 17:53:01.361359 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.361838 kubelet[2620]: E0904 17:53:01.361814 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.363742 kubelet[2620]: E0904 17:53:01.363716 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.363742 kubelet[2620]: W0904 17:53:01.363738 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.365328 kubelet[2620]: E0904 17:53:01.363760 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.365328 kubelet[2620]: E0904 17:53:01.364103 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.365328 kubelet[2620]: W0904 17:53:01.364118 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.365328 kubelet[2620]: E0904 17:53:01.364137 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.366368 kubelet[2620]: E0904 17:53:01.366292 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.366368 kubelet[2620]: W0904 17:53:01.366313 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.366368 kubelet[2620]: E0904 17:53:01.366332 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.368241 kubelet[2620]: E0904 17:53:01.367305 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.368241 kubelet[2620]: W0904 17:53:01.367321 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.368241 kubelet[2620]: E0904 17:53:01.367340 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.369069 kubelet[2620]: E0904 17:53:01.369039 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.369069 kubelet[2620]: W0904 17:53:01.369062 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.370291 kubelet[2620]: E0904 17:53:01.369079 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.371972 kubelet[2620]: E0904 17:53:01.371925 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.371972 kubelet[2620]: W0904 17:53:01.371945 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.371972 kubelet[2620]: E0904 17:53:01.371963 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.373172 kubelet[2620]: E0904 17:53:01.372620 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.373172 kubelet[2620]: W0904 17:53:01.372645 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.373172 kubelet[2620]: E0904 17:53:01.372662 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.374186 kubelet[2620]: E0904 17:53:01.373470 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.374186 kubelet[2620]: W0904 17:53:01.373489 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.374186 kubelet[2620]: E0904 17:53:01.373509 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.374408 kubelet[2620]: E0904 17:53:01.374391 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.374408 kubelet[2620]: W0904 17:53:01.374406 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.374512 kubelet[2620]: E0904 17:53:01.374423 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.383178 kubelet[2620]: E0904 17:53:01.375542 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.383178 kubelet[2620]: W0904 17:53:01.375560 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.383178 kubelet[2620]: E0904 17:53:01.375576 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.383178 kubelet[2620]: E0904 17:53:01.376014 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.383178 kubelet[2620]: W0904 17:53:01.376026 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.383178 kubelet[2620]: E0904 17:53:01.376042 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.383178 kubelet[2620]: E0904 17:53:01.376381 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.383178 kubelet[2620]: W0904 17:53:01.376393 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.383178 kubelet[2620]: E0904 17:53:01.376415 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.383178 kubelet[2620]: E0904 17:53:01.376769 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.383822 kubelet[2620]: W0904 17:53:01.376784 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.383822 kubelet[2620]: E0904 17:53:01.376815 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.383822 kubelet[2620]: E0904 17:53:01.377138 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.383822 kubelet[2620]: W0904 17:53:01.377184 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.383822 kubelet[2620]: E0904 17:53:01.377354 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.383822 kubelet[2620]: E0904 17:53:01.377561 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.383822 kubelet[2620]: W0904 17:53:01.377573 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.383822 kubelet[2620]: E0904 17:53:01.377596 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.383822 kubelet[2620]: E0904 17:53:01.377965 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.383822 kubelet[2620]: W0904 17:53:01.377979 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.395012 kubelet[2620]: E0904 17:53:01.377995 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.395012 kubelet[2620]: E0904 17:53:01.378364 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.395012 kubelet[2620]: W0904 17:53:01.378378 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.395012 kubelet[2620]: E0904 17:53:01.378393 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.395012 kubelet[2620]: E0904 17:53:01.378662 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.395012 kubelet[2620]: W0904 17:53:01.378674 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.395012 kubelet[2620]: E0904 17:53:01.378691 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.395012 kubelet[2620]: E0904 17:53:01.378974 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.395012 kubelet[2620]: W0904 17:53:01.378987 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.395012 kubelet[2620]: E0904 17:53:01.379014 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.395579 kubelet[2620]: E0904 17:53:01.379338 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.395579 kubelet[2620]: W0904 17:53:01.379352 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.395579 kubelet[2620]: E0904 17:53:01.379369 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.395579 kubelet[2620]: E0904 17:53:01.379827 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.395579 kubelet[2620]: W0904 17:53:01.379842 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.395579 kubelet[2620]: E0904 17:53:01.379860 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.395579 kubelet[2620]: I0904 17:53:01.379897 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9skv\" (UniqueName: \"kubernetes.io/projected/3f204459-67ca-4ef3-87db-d2dfa1c8a5a7-kube-api-access-c9skv\") pod \"csi-node-driver-6v7cb\" (UID: \"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7\") " pod="calico-system/csi-node-driver-6v7cb" Sep 4 17:53:01.395579 kubelet[2620]: E0904 17:53:01.380271 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.395579 kubelet[2620]: W0904 17:53:01.380288 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.396001 kubelet[2620]: E0904 17:53:01.380319 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.396001 kubelet[2620]: I0904 17:53:01.380348 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f204459-67ca-4ef3-87db-d2dfa1c8a5a7-kubelet-dir\") pod \"csi-node-driver-6v7cb\" (UID: \"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7\") " pod="calico-system/csi-node-driver-6v7cb" Sep 4 17:53:01.396001 kubelet[2620]: E0904 17:53:01.380702 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.396001 kubelet[2620]: W0904 17:53:01.380717 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.396001 kubelet[2620]: E0904 17:53:01.380779 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.396001 kubelet[2620]: E0904 17:53:01.382105 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.396001 kubelet[2620]: W0904 17:53:01.382121 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.396001 kubelet[2620]: E0904 17:53:01.382166 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.396001 kubelet[2620]: E0904 17:53:01.382460 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.402617 kubelet[2620]: W0904 17:53:01.382472 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.402617 kubelet[2620]: E0904 17:53:01.382554 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.402617 kubelet[2620]: I0904 17:53:01.382586 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3f204459-67ca-4ef3-87db-d2dfa1c8a5a7-registration-dir\") pod \"csi-node-driver-6v7cb\" (UID: \"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7\") " pod="calico-system/csi-node-driver-6v7cb" Sep 4 17:53:01.402617 kubelet[2620]: E0904 17:53:01.384286 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.402617 kubelet[2620]: W0904 17:53:01.384302 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.402617 kubelet[2620]: E0904 17:53:01.384326 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.402617 kubelet[2620]: E0904 17:53:01.384854 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.402617 kubelet[2620]: W0904 17:53:01.384870 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.402617 kubelet[2620]: E0904 17:53:01.384968 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.403069 kubelet[2620]: E0904 17:53:01.385349 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.403069 kubelet[2620]: W0904 17:53:01.385362 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.403069 kubelet[2620]: E0904 17:53:01.385585 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.403069 kubelet[2620]: I0904 17:53:01.385624 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3f204459-67ca-4ef3-87db-d2dfa1c8a5a7-varrun\") pod \"csi-node-driver-6v7cb\" (UID: \"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7\") " pod="calico-system/csi-node-driver-6v7cb" Sep 4 17:53:01.403069 kubelet[2620]: E0904 17:53:01.386712 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.403069 kubelet[2620]: W0904 17:53:01.386740 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.403069 kubelet[2620]: E0904 17:53:01.386765 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.403069 kubelet[2620]: E0904 17:53:01.387408 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.403069 kubelet[2620]: W0904 17:53:01.387426 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.406219 kubelet[2620]: E0904 17:53:01.387523 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.406219 kubelet[2620]: E0904 17:53:01.388572 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.406219 kubelet[2620]: W0904 17:53:01.388588 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.406219 kubelet[2620]: E0904 17:53:01.388734 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.406219 kubelet[2620]: I0904 17:53:01.388763 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3f204459-67ca-4ef3-87db-d2dfa1c8a5a7-socket-dir\") pod \"csi-node-driver-6v7cb\" (UID: \"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7\") " pod="calico-system/csi-node-driver-6v7cb" Sep 4 17:53:01.406219 kubelet[2620]: E0904 17:53:01.388985 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.406219 kubelet[2620]: W0904 17:53:01.388996 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.406219 kubelet[2620]: E0904 17:53:01.389038 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.406219 kubelet[2620]: E0904 17:53:01.389947 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.408885 kubelet[2620]: W0904 17:53:01.389962 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.408885 kubelet[2620]: E0904 17:53:01.389997 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.408885 kubelet[2620]: E0904 17:53:01.390363 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.408885 kubelet[2620]: W0904 17:53:01.390376 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.408885 kubelet[2620]: E0904 17:53:01.390405 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.408885 kubelet[2620]: E0904 17:53:01.391288 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.408885 kubelet[2620]: W0904 17:53:01.391304 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.408885 kubelet[2620]: E0904 17:53:01.391323 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.423016 containerd[1456]: time="2024-09-04T17:53:01.422529187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:53:01.423016 containerd[1456]: time="2024-09-04T17:53:01.422615420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:53:01.423016 containerd[1456]: time="2024-09-04T17:53:01.422641995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:01.423016 containerd[1456]: time="2024-09-04T17:53:01.422764962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:01.460959 containerd[1456]: time="2024-09-04T17:53:01.460898183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tp87,Uid:df466ef6-c2c6-49ac-a3a0-8b16d416984d,Namespace:calico-system,Attempt:0,}" Sep 4 17:53:01.467397 systemd[1]: Started cri-containerd-690afced0909132fc63c1e6fc74e2b76bed290e92ea37f85bd285e38caec02cb.scope - libcontainer container 690afced0909132fc63c1e6fc74e2b76bed290e92ea37f85bd285e38caec02cb. Sep 4 17:53:01.489694 kubelet[2620]: E0904 17:53:01.489433 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.489694 kubelet[2620]: W0904 17:53:01.489466 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.489694 kubelet[2620]: E0904 17:53:01.489496 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.490465 kubelet[2620]: E0904 17:53:01.489920 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.490465 kubelet[2620]: W0904 17:53:01.489936 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.490465 kubelet[2620]: E0904 17:53:01.489972 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.490823 kubelet[2620]: E0904 17:53:01.490469 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.490823 kubelet[2620]: W0904 17:53:01.490489 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.491217 kubelet[2620]: E0904 17:53:01.490827 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.491639 kubelet[2620]: E0904 17:53:01.491615 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.491897 kubelet[2620]: W0904 17:53:01.491641 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.491897 kubelet[2620]: E0904 17:53:01.491665 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.492809 kubelet[2620]: E0904 17:53:01.492786 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.492809 kubelet[2620]: W0904 17:53:01.492808 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.493331 kubelet[2620]: E0904 17:53:01.493235 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.494412 kubelet[2620]: E0904 17:53:01.494375 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.494412 kubelet[2620]: W0904 17:53:01.494410 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.494703 kubelet[2620]: E0904 17:53:01.494676 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.495348 kubelet[2620]: E0904 17:53:01.495221 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.495348 kubelet[2620]: W0904 17:53:01.495242 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.495520 kubelet[2620]: E0904 17:53:01.495414 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.496804 kubelet[2620]: E0904 17:53:01.496399 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.496804 kubelet[2620]: W0904 17:53:01.496416 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.496804 kubelet[2620]: E0904 17:53:01.496529 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.496804 kubelet[2620]: E0904 17:53:01.496753 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.496804 kubelet[2620]: W0904 17:53:01.496764 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.497349 kubelet[2620]: E0904 17:53:01.497121 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.497598 kubelet[2620]: E0904 17:53:01.497574 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.497598 kubelet[2620]: W0904 17:53:01.497596 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.498104 kubelet[2620]: E0904 17:53:01.498022 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.498854 kubelet[2620]: E0904 17:53:01.498827 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.498854 kubelet[2620]: W0904 17:53:01.498851 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.499028 kubelet[2620]: E0904 17:53:01.498974 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.499385 kubelet[2620]: E0904 17:53:01.499348 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.499385 kubelet[2620]: W0904 17:53:01.499372 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.500057 kubelet[2620]: E0904 17:53:01.499967 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.500507 kubelet[2620]: E0904 17:53:01.500465 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.500507 kubelet[2620]: W0904 17:53:01.500486 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.501297 kubelet[2620]: E0904 17:53:01.501282 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.501531 kubelet[2620]: E0904 17:53:01.501509 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.501531 kubelet[2620]: W0904 17:53:01.501530 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.502260 kubelet[2620]: E0904 17:53:01.502229 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.503961 kubelet[2620]: E0904 17:53:01.502527 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.503961 kubelet[2620]: W0904 17:53:01.502542 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.503961 kubelet[2620]: E0904 17:53:01.502665 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.503961 kubelet[2620]: E0904 17:53:01.502868 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.503961 kubelet[2620]: W0904 17:53:01.502879 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.503961 kubelet[2620]: E0904 17:53:01.503040 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.503961 kubelet[2620]: E0904 17:53:01.503323 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.503961 kubelet[2620]: W0904 17:53:01.503336 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.503961 kubelet[2620]: E0904 17:53:01.503447 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.503961 kubelet[2620]: E0904 17:53:01.503647 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.504505 kubelet[2620]: W0904 17:53:01.503657 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.504505 kubelet[2620]: E0904 17:53:01.503750 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.504827 kubelet[2620]: E0904 17:53:01.504719 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.504827 kubelet[2620]: W0904 17:53:01.504736 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.505209 kubelet[2620]: E0904 17:53:01.504961 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.505406 kubelet[2620]: E0904 17:53:01.505382 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.505645 kubelet[2620]: W0904 17:53:01.505488 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.507225 kubelet[2620]: E0904 17:53:01.506582 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.507225 kubelet[2620]: E0904 17:53:01.506861 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.507225 kubelet[2620]: W0904 17:53:01.506874 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.507225 kubelet[2620]: E0904 17:53:01.507025 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.507680 kubelet[2620]: E0904 17:53:01.507656 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.507799 kubelet[2620]: W0904 17:53:01.507778 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.508181 kubelet[2620]: E0904 17:53:01.508016 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:01.509142 kubelet[2620]: E0904 17:53:01.508642 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.509142 kubelet[2620]: W0904 17:53:01.508659 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.509142 kubelet[2620]: E0904 17:53:01.508807 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.509142 kubelet[2620]: E0904 17:53:01.509040 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.509142 kubelet[2620]: W0904 17:53:01.509052 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.509142 kubelet[2620]: E0904 17:53:01.509087 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.509917 kubelet[2620]: E0904 17:53:01.509847 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.509917 kubelet[2620]: W0904 17:53:01.509865 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.509917 kubelet[2620]: E0904 17:53:01.509881 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.542335 kubelet[2620]: E0904 17:53:01.542198 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:01.542698 kubelet[2620]: W0904 17:53:01.542512 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:01.542698 kubelet[2620]: E0904 17:53:01.542549 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:01.555732 containerd[1456]: time="2024-09-04T17:53:01.548773482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:53:01.555732 containerd[1456]: time="2024-09-04T17:53:01.548872788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:53:01.555732 containerd[1456]: time="2024-09-04T17:53:01.548902954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:01.555732 containerd[1456]: time="2024-09-04T17:53:01.549029525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:01.613428 systemd[1]: Started cri-containerd-35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51.scope - libcontainer container 35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51. Sep 4 17:53:01.648378 containerd[1456]: time="2024-09-04T17:53:01.648322425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d8c7f4569-fgd99,Uid:f46cca9a-3668-487d-a7d0-25094f5f3795,Namespace:calico-system,Attempt:0,} returns sandbox id \"690afced0909132fc63c1e6fc74e2b76bed290e92ea37f85bd285e38caec02cb\"" Sep 4 17:53:01.651498 containerd[1456]: time="2024-09-04T17:53:01.651452141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:53:01.697672 containerd[1456]: time="2024-09-04T17:53:01.697370604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tp87,Uid:df466ef6-c2c6-49ac-a3a0-8b16d416984d,Namespace:calico-system,Attempt:0,} returns sandbox id \"35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51\"" Sep 4 17:53:02.538179 kubelet[2620]: E0904 17:53:02.537246 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6v7cb" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:03.905990 containerd[1456]: time="2024-09-04T17:53:03.905910350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:03.907393 containerd[1456]: time="2024-09-04T17:53:03.907330585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:53:03.908992 containerd[1456]: time="2024-09-04T17:53:03.908905869Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:03.912343 containerd[1456]: time="2024-09-04T17:53:03.912277720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:03.916056 containerd[1456]: time="2024-09-04T17:53:03.915997350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.264452799s" Sep 4 17:53:03.917122 containerd[1456]: time="2024-09-04T17:53:03.916206889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:53:03.918957 containerd[1456]: time="2024-09-04T17:53:03.918908476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:53:03.948580 containerd[1456]: time="2024-09-04T17:53:03.948500686Z" level=info msg="CreateContainer within sandbox \"690afced0909132fc63c1e6fc74e2b76bed290e92ea37f85bd285e38caec02cb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:53:03.969480 containerd[1456]: 
time="2024-09-04T17:53:03.969431456Z" level=info msg="CreateContainer within sandbox \"690afced0909132fc63c1e6fc74e2b76bed290e92ea37f85bd285e38caec02cb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"599fc48131757575b21cecf34d8ac669e9f37b06dd60ff92b0fb31f7040fe058\"" Sep 4 17:53:03.970398 containerd[1456]: time="2024-09-04T17:53:03.970364330Z" level=info msg="StartContainer for \"599fc48131757575b21cecf34d8ac669e9f37b06dd60ff92b0fb31f7040fe058\"" Sep 4 17:53:04.038403 systemd[1]: Started cri-containerd-599fc48131757575b21cecf34d8ac669e9f37b06dd60ff92b0fb31f7040fe058.scope - libcontainer container 599fc48131757575b21cecf34d8ac669e9f37b06dd60ff92b0fb31f7040fe058. Sep 4 17:53:04.116838 containerd[1456]: time="2024-09-04T17:53:04.116744527Z" level=info msg="StartContainer for \"599fc48131757575b21cecf34d8ac669e9f37b06dd60ff92b0fb31f7040fe058\" returns successfully" Sep 4 17:53:04.537624 kubelet[2620]: E0904 17:53:04.537556 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6v7cb" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:04.678469 kubelet[2620]: I0904 17:53:04.678391 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d8c7f4569-fgd99" podStartSLOduration=2.411044868 podStartE2EDuration="4.678369673s" podCreationTimestamp="2024-09-04 17:53:00 +0000 UTC" firstStartedPulling="2024-09-04 17:53:01.65093758 +0000 UTC m=+22.260456974" lastFinishedPulling="2024-09-04 17:53:03.918262377 +0000 UTC m=+24.527781779" observedRunningTime="2024-09-04 17:53:04.677354824 +0000 UTC m=+25.286874231" watchObservedRunningTime="2024-09-04 17:53:04.678369673 +0000 UTC m=+25.287889080" Sep 4 17:53:04.708089 kubelet[2620]: E0904 17:53:04.708028 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.708089 kubelet[2620]: W0904 17:53:04.708062 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.708089 kubelet[2620]: E0904 17:53:04.708093 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.708801 kubelet[2620]: E0904 17:53:04.708432 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.708801 kubelet[2620]: W0904 17:53:04.708448 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.708801 kubelet[2620]: E0904 17:53:04.708467 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:04.708801 kubelet[2620]: E0904 17:53:04.708807 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.709175 kubelet[2620]: W0904 17:53:04.708831 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.709175 kubelet[2620]: E0904 17:53:04.708848 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.709175 kubelet[2620]: E0904 17:53:04.709164 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.709650 kubelet[2620]: W0904 17:53:04.709179 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.709650 kubelet[2620]: E0904 17:53:04.709196 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.709650 kubelet[2620]: E0904 17:53:04.709498 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.709650 kubelet[2620]: W0904 17:53:04.709513 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.709650 kubelet[2620]: E0904 17:53:04.709529 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.710072 kubelet[2620]: E0904 17:53:04.709832 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.710072 kubelet[2620]: W0904 17:53:04.709846 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.710072 kubelet[2620]: E0904 17:53:04.709870 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.710336 kubelet[2620]: E0904 17:53:04.710164 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.710336 kubelet[2620]: W0904 17:53:04.710178 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.710336 kubelet[2620]: E0904 17:53:04.710194 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:04.710687 kubelet[2620]: E0904 17:53:04.710473 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.710687 kubelet[2620]: W0904 17:53:04.710489 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.710687 kubelet[2620]: E0904 17:53:04.710505 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.710986 kubelet[2620]: E0904 17:53:04.710970 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.710986 kubelet[2620]: W0904 17:53:04.710987 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.710986 kubelet[2620]: E0904 17:53:04.711002 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.711340 kubelet[2620]: E0904 17:53:04.711322 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.711340 kubelet[2620]: W0904 17:53:04.711338 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.711481 kubelet[2620]: E0904 17:53:04.711355 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.711757 kubelet[2620]: E0904 17:53:04.711736 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.711757 kubelet[2620]: W0904 17:53:04.711754 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.711757 kubelet[2620]: E0904 17:53:04.711770 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.712190 kubelet[2620]: E0904 17:53:04.712169 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.712190 kubelet[2620]: W0904 17:53:04.712188 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.712316 kubelet[2620]: E0904 17:53:04.712205 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:04.712530 kubelet[2620]: E0904 17:53:04.712511 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.712530 kubelet[2620]: W0904 17:53:04.712527 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.712681 kubelet[2620]: E0904 17:53:04.712547 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.712864 kubelet[2620]: E0904 17:53:04.712842 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.712864 kubelet[2620]: W0904 17:53:04.712859 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.713069 kubelet[2620]: E0904 17:53:04.712875 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.713262 kubelet[2620]: E0904 17:53:04.713188 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.713262 kubelet[2620]: W0904 17:53:04.713204 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.713262 kubelet[2620]: E0904 17:53:04.713220 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.718644 kubelet[2620]: E0904 17:53:04.718611 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.718817 kubelet[2620]: W0904 17:53:04.718637 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.718817 kubelet[2620]: E0904 17:53:04.718681 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.719043 kubelet[2620]: E0904 17:53:04.719015 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.719130 kubelet[2620]: W0904 17:53:04.719065 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.719130 kubelet[2620]: E0904 17:53:04.719086 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:04.719572 kubelet[2620]: E0904 17:53:04.719549 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.719572 kubelet[2620]: W0904 17:53:04.719571 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.719834 kubelet[2620]: E0904 17:53:04.719589 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.719951 kubelet[2620]: E0904 17:53:04.719936 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.720051 kubelet[2620]: W0904 17:53:04.719952 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.720051 kubelet[2620]: E0904 17:53:04.719969 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.720376 kubelet[2620]: E0904 17:53:04.720350 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.720376 kubelet[2620]: W0904 17:53:04.720365 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.720542 kubelet[2620]: E0904 17:53:04.720383 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.720814 kubelet[2620]: E0904 17:53:04.720649 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.720814 kubelet[2620]: W0904 17:53:04.720662 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.720814 kubelet[2620]: E0904 17:53:04.720677 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.721238 kubelet[2620]: E0904 17:53:04.721075 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.721238 kubelet[2620]: W0904 17:53:04.721088 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.721238 kubelet[2620]: E0904 17:53:04.721105 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:04.721829 kubelet[2620]: E0904 17:53:04.721805 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.721829 kubelet[2620]: W0904 17:53:04.721825 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.722400 kubelet[2620]: E0904 17:53:04.721872 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.722400 kubelet[2620]: E0904 17:53:04.722324 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.722400 kubelet[2620]: W0904 17:53:04.722339 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.722400 kubelet[2620]: E0904 17:53:04.722356 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.723020 kubelet[2620]: E0904 17:53:04.722653 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.723020 kubelet[2620]: W0904 17:53:04.722678 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.723020 kubelet[2620]: E0904 17:53:04.722693 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.723020 kubelet[2620]: E0904 17:53:04.722988 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.723279 kubelet[2620]: W0904 17:53:04.723033 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.723279 kubelet[2620]: E0904 17:53:04.723051 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.724012 kubelet[2620]: E0904 17:53:04.723439 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.724012 kubelet[2620]: W0904 17:53:04.723455 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.724012 kubelet[2620]: E0904 17:53:04.723470 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:04.725009 kubelet[2620]: E0904 17:53:04.724832 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.725009 kubelet[2620]: W0904 17:53:04.724872 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.725009 kubelet[2620]: E0904 17:53:04.724946 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.725789 kubelet[2620]: E0904 17:53:04.725518 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.725789 kubelet[2620]: W0904 17:53:04.725535 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.725789 kubelet[2620]: E0904 17:53:04.725551 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.725962 kubelet[2620]: E0904 17:53:04.725949 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.726016 kubelet[2620]: W0904 17:53:04.725963 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.726016 kubelet[2620]: E0904 17:53:04.725979 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.727333 kubelet[2620]: E0904 17:53:04.726382 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.727333 kubelet[2620]: W0904 17:53:04.726436 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.727333 kubelet[2620]: E0904 17:53:04.726456 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.727333 kubelet[2620]: E0904 17:53:04.726937 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.727333 kubelet[2620]: W0904 17:53:04.726973 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.727333 kubelet[2620]: E0904 17:53:04.726993 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:53:04.728256 kubelet[2620]: E0904 17:53:04.728236 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:53:04.728334 kubelet[2620]: W0904 17:53:04.728256 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:53:04.728334 kubelet[2620]: E0904 17:53:04.728312 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:53:04.969099 containerd[1456]: time="2024-09-04T17:53:04.969036419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:04.970557 containerd[1456]: time="2024-09-04T17:53:04.970338059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:53:04.972337 containerd[1456]: time="2024-09-04T17:53:04.972273645Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:04.978145 containerd[1456]: time="2024-09-04T17:53:04.977332183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:04.978145 containerd[1456]: time="2024-09-04T17:53:04.977979650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.059018662s" Sep 4 17:53:04.978145 containerd[1456]: time="2024-09-04T17:53:04.978026451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:53:04.982260 containerd[1456]: time="2024-09-04T17:53:04.982213885Z" level=info msg="CreateContainer within sandbox \"35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:53:05.010983 containerd[1456]: time="2024-09-04T17:53:05.010927528Z" level=info msg="CreateContainer within sandbox \"35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59\"" Sep 4 17:53:05.013206 containerd[1456]: time="2024-09-04T17:53:05.011821529Z" level=info msg="StartContainer for \"013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59\"" Sep 4 17:53:05.084503 systemd[1]: run-containerd-runc-k8s.io-013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59-runc.pai7wW.mount: Deactivated successfully. 
Sep 4 17:53:05.105674 systemd[1]: Started cri-containerd-013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59.scope - libcontainer container 013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59. Sep 4 17:53:05.179813 containerd[1456]: time="2024-09-04T17:53:05.179749853Z" level=info msg="StartContainer for \"013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59\" returns successfully" Sep 4 17:53:05.209540 systemd[1]: cri-containerd-013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59.scope: Deactivated successfully. Sep 4 17:53:05.286449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59-rootfs.mount: Deactivated successfully. Sep 4 17:53:05.671495 kubelet[2620]: I0904 17:53:05.670215 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:53:05.844297 containerd[1456]: time="2024-09-04T17:53:05.844178263Z" level=info msg="shim disconnected" id=013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59 namespace=k8s.io Sep 4 17:53:05.844297 containerd[1456]: time="2024-09-04T17:53:05.844267214Z" level=warning msg="cleaning up after shim disconnected" id=013bf4894c9e7b97804fbe76f460abe0385a04c4161b23461332c0c8c32f6f59 namespace=k8s.io Sep 4 17:53:05.844297 containerd[1456]: time="2024-09-04T17:53:05.844282856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:53:06.536837 kubelet[2620]: E0904 17:53:06.536759 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6v7cb" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:06.677457 containerd[1456]: time="2024-09-04T17:53:06.677351178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:53:08.537374 kubelet[2620]: E0904 17:53:08.537303 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6v7cb" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:10.539185 kubelet[2620]: E0904 17:53:10.537616 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6v7cb" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:10.925664 containerd[1456]: time="2024-09-04T17:53:10.925595464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:10.926944 containerd[1456]: time="2024-09-04T17:53:10.926881908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:53:10.928480 containerd[1456]: time="2024-09-04T17:53:10.928411809Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:10.932185 containerd[1456]: time="2024-09-04T17:53:10.931817808Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:10.933781 containerd[1456]: time="2024-09-04T17:53:10.932969971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.255512173s" Sep 4 17:53:10.933781 containerd[1456]: time="2024-09-04T17:53:10.933017976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:53:10.937405 containerd[1456]: time="2024-09-04T17:53:10.937163562Z" level=info msg="CreateContainer within sandbox \"35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:53:10.958043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397914652.mount: Deactivated successfully. Sep 4 17:53:10.958534 containerd[1456]: time="2024-09-04T17:53:10.958405813Z" level=info msg="CreateContainer within sandbox \"35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e\"" Sep 4 17:53:10.960374 containerd[1456]: time="2024-09-04T17:53:10.960130792Z" level=info msg="StartContainer for \"afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e\"" Sep 4 17:53:11.008691 systemd[1]: run-containerd-runc-k8s.io-afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e-runc.AVVwMz.mount: Deactivated successfully. Sep 4 17:53:11.017402 systemd[1]: Started cri-containerd-afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e.scope - libcontainer container afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e. Sep 4 17:53:11.056285 containerd[1456]: time="2024-09-04T17:53:11.056110784Z" level=info msg="StartContainer for \"afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e\" returns successfully" Sep 4 17:53:11.921966 systemd[1]: cri-containerd-afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e.scope: Deactivated successfully. Sep 4 17:53:11.956584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e-rootfs.mount: Deactivated successfully. 
Sep 4 17:53:11.975737 kubelet[2620]: I0904 17:53:11.975570 2620 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:53:12.005061 kubelet[2620]: I0904 17:53:12.003431 2620 topology_manager.go:215] "Topology Admit Handler" podUID="f10445bd-c567-4cbb-b259-041496b4f378" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s2r6n" Sep 4 17:53:12.020317 kubelet[2620]: I0904 17:53:12.017768 2620 topology_manager.go:215] "Topology Admit Handler" podUID="f2e7d76b-6bfd-417d-a1c5-8517155d4273" podNamespace="calico-system" podName="calico-kube-controllers-5f4c99c577-f2nlw" Sep 4 17:53:12.020658 kubelet[2620]: I0904 17:53:12.020587 2620 topology_manager.go:215] "Topology Admit Handler" podUID="090627b4-68b0-464b-9437-a497123fe057" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kmp25" Sep 4 17:53:12.022633 systemd[1]: Created slice kubepods-burstable-podf10445bd_c567_4cbb_b259_041496b4f378.slice - libcontainer container kubepods-burstable-podf10445bd_c567_4cbb_b259_041496b4f378.slice. Sep 4 17:53:12.047220 systemd[1]: Created slice kubepods-burstable-pod090627b4_68b0_464b_9437_a497123fe057.slice - libcontainer container kubepods-burstable-pod090627b4_68b0_464b_9437_a497123fe057.slice. Sep 4 17:53:12.058965 systemd[1]: Created slice kubepods-besteffort-podf2e7d76b_6bfd_417d_a1c5_8517155d4273.slice - libcontainer container kubepods-besteffort-podf2e7d76b_6bfd_417d_a1c5_8517155d4273.slice. Sep 4 17:53:12.175700 kubelet[2620]: I0904 17:53:12.175518 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e7d76b-6bfd-417d-a1c5-8517155d4273-tigera-ca-bundle\") pod \"calico-kube-controllers-5f4c99c577-f2nlw\" (UID: \"f2e7d76b-6bfd-417d-a1c5-8517155d4273\") " pod="calico-system/calico-kube-controllers-5f4c99c577-f2nlw" Sep 4 17:53:12.175700 kubelet[2620]: I0904 17:53:12.175583 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/090627b4-68b0-464b-9437-a497123fe057-config-volume\") pod \"coredns-7db6d8ff4d-kmp25\" (UID: \"090627b4-68b0-464b-9437-a497123fe057\") " pod="kube-system/coredns-7db6d8ff4d-kmp25" Sep 4 17:53:12.175700 kubelet[2620]: I0904 17:53:12.175617 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npthn\" (UniqueName: \"kubernetes.io/projected/090627b4-68b0-464b-9437-a497123fe057-kube-api-access-npthn\") pod \"coredns-7db6d8ff4d-kmp25\" (UID: \"090627b4-68b0-464b-9437-a497123fe057\") " pod="kube-system/coredns-7db6d8ff4d-kmp25" Sep 4 17:53:12.175700 kubelet[2620]: I0904 17:53:12.175651 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f10445bd-c567-4cbb-b259-041496b4f378-config-volume\") pod \"coredns-7db6d8ff4d-s2r6n\" (UID: \"f10445bd-c567-4cbb-b259-041496b4f378\") " pod="kube-system/coredns-7db6d8ff4d-s2r6n" Sep 4 17:53:12.175700 kubelet[2620]: I0904 17:53:12.175678 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ckdz\" (UniqueName: \"kubernetes.io/projected/f2e7d76b-6bfd-417d-a1c5-8517155d4273-kube-api-access-9ckdz\") pod \"calico-kube-controllers-5f4c99c577-f2nlw\" (UID: \"f2e7d76b-6bfd-417d-a1c5-8517155d4273\") " pod="calico-system/calico-kube-controllers-5f4c99c577-f2nlw" Sep 4 17:53:12.176444 
kubelet[2620]: I0904 17:53:12.175706 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z82gz\" (UniqueName: \"kubernetes.io/projected/f10445bd-c567-4cbb-b259-041496b4f378-kube-api-access-z82gz\") pod \"coredns-7db6d8ff4d-s2r6n\" (UID: \"f10445bd-c567-4cbb-b259-041496b4f378\") " pod="kube-system/coredns-7db6d8ff4d-s2r6n" Sep 4 17:53:12.334715 containerd[1456]: time="2024-09-04T17:53:12.334655065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2r6n,Uid:f10445bd-c567-4cbb-b259-041496b4f378,Namespace:kube-system,Attempt:0,}" Sep 4 17:53:12.356962 containerd[1456]: time="2024-09-04T17:53:12.356906841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kmp25,Uid:090627b4-68b0-464b-9437-a497123fe057,Namespace:kube-system,Attempt:0,}" Sep 4 17:53:12.368146 containerd[1456]: time="2024-09-04T17:53:12.368081500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4c99c577-f2nlw,Uid:f2e7d76b-6bfd-417d-a1c5-8517155d4273,Namespace:calico-system,Attempt:0,}" Sep 4 17:53:12.545265 systemd[1]: Created slice kubepods-besteffort-pod3f204459_67ca_4ef3_87db_d2dfa1c8a5a7.slice - libcontainer container kubepods-besteffort-pod3f204459_67ca_4ef3_87db_d2dfa1c8a5a7.slice. Sep 4 17:53:12.549726 containerd[1456]: time="2024-09-04T17:53:12.549676322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6v7cb,Uid:3f204459-67ca-4ef3-87db-d2dfa1c8a5a7,Namespace:calico-system,Attempt:0,}" Sep 4 17:53:12.696437 containerd[1456]: time="2024-09-04T17:53:12.696123149Z" level=info msg="shim disconnected" id=afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e namespace=k8s.io Sep 4 17:53:12.696437 containerd[1456]: time="2024-09-04T17:53:12.696210322Z" level=warning msg="cleaning up after shim disconnected" id=afa01d8c2553e856788f63dfc026b97037300466917bc4dc6d1483e20bf5086e namespace=k8s.io Sep 4 17:53:12.696437 containerd[1456]: time="2024-09-04T17:53:12.696226229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:53:12.919051 containerd[1456]: time="2024-09-04T17:53:12.918981930Z" level=error msg="Failed to destroy network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.920180 containerd[1456]: time="2024-09-04T17:53:12.919470806Z" level=error msg="encountered an error cleaning up failed sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.920180 containerd[1456]: time="2024-09-04T17:53:12.919551376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2r6n,Uid:f10445bd-c567-4cbb-b259-041496b4f378,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.920363 kubelet[2620]: E0904 17:53:12.919955 2620 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.920363 kubelet[2620]: E0904 17:53:12.920032 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s2r6n" Sep 4 17:53:12.920363 kubelet[2620]: E0904 17:53:12.920065 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s2r6n" Sep 4 17:53:12.920544 kubelet[2620]: E0904 17:53:12.920124 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s2r6n_kube-system(f10445bd-c567-4cbb-b259-041496b4f378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s2r6n_kube-system(f10445bd-c567-4cbb-b259-041496b4f378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s2r6n" podUID="f10445bd-c567-4cbb-b259-041496b4f378" Sep 4 17:53:12.965586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787-shm.mount: Deactivated successfully. 
Sep 4 17:53:12.972337 containerd[1456]: time="2024-09-04T17:53:12.971365690Z" level=error msg="Failed to destroy network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.975300 containerd[1456]: time="2024-09-04T17:53:12.974188284Z" level=error msg="encountered an error cleaning up failed sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.975472 containerd[1456]: time="2024-09-04T17:53:12.975420638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6v7cb,Uid:3f204459-67ca-4ef3-87db-d2dfa1c8a5a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.978178 kubelet[2620]: E0904 17:53:12.977235 2620 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.978178 kubelet[2620]: E0904 17:53:12.977917 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6v7cb" Sep 4 17:53:12.978178 kubelet[2620]: E0904 17:53:12.977978 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6v7cb" Sep 4 17:53:12.979211 kubelet[2620]: E0904 17:53:12.978842 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6v7cb_calico-system(3f204459-67ca-4ef3-87db-d2dfa1c8a5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6v7cb_calico-system(3f204459-67ca-4ef3-87db-d2dfa1c8a5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6v7cb" 
podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:12.980362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969-shm.mount: Deactivated successfully. Sep 4 17:53:12.983092 containerd[1456]: time="2024-09-04T17:53:12.981357151Z" level=error msg="Failed to destroy network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.984807 containerd[1456]: time="2024-09-04T17:53:12.984590007Z" level=error msg="Failed to destroy network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.987180 containerd[1456]: time="2024-09-04T17:53:12.985908470Z" level=error msg="encountered an error cleaning up failed sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.987180 containerd[1456]: time="2024-09-04T17:53:12.985989582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kmp25,Uid:090627b4-68b0-464b-9437-a497123fe057,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.987365 kubelet[2620]: E0904 17:53:12.986297 2620 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.987365 kubelet[2620]: E0904 17:53:12.986362 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kmp25" Sep 4 17:53:12.987365 kubelet[2620]: E0904 17:53:12.986393 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kmp25" Sep 4 17:53:12.987555 kubelet[2620]: E0904 17:53:12.986456 2620 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kmp25_kube-system(090627b4-68b0-464b-9437-a497123fe057)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kmp25_kube-system(090627b4-68b0-464b-9437-a497123fe057)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kmp25" podUID="090627b4-68b0-464b-9437-a497123fe057" Sep 4 17:53:12.989203 containerd[1456]: time="2024-09-04T17:53:12.987870106Z" level=error msg="encountered an error cleaning up failed sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.989203 containerd[1456]: time="2024-09-04T17:53:12.987993336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4c99c577-f2nlw,Uid:f2e7d76b-6bfd-417d-a1c5-8517155d4273,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.990385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43-shm.mount: Deactivated successfully. 
Sep 4 17:53:12.992306 kubelet[2620]: E0904 17:53:12.991362 2620 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:12.992306 kubelet[2620]: E0904 17:53:12.991427 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f4c99c577-f2nlw" Sep 4 17:53:12.992306 kubelet[2620]: E0904 17:53:12.991457 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f4c99c577-f2nlw" Sep 4 17:53:12.992538 kubelet[2620]: E0904 17:53:12.991512 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f4c99c577-f2nlw_calico-system(f2e7d76b-6bfd-417d-a1c5-8517155d4273)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f4c99c577-f2nlw_calico-system(f2e7d76b-6bfd-417d-a1c5-8517155d4273)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f4c99c577-f2nlw" podUID="f2e7d76b-6bfd-417d-a1c5-8517155d4273" Sep 4 17:53:12.997860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed-shm.mount: Deactivated successfully. 
Sep 4 17:53:13.700505 kubelet[2620]: I0904 17:53:13.700295 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:13.701551 containerd[1456]: time="2024-09-04T17:53:13.701319113Z" level=info msg="StopPodSandbox for \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\"" Sep 4 17:53:13.704213 containerd[1456]: time="2024-09-04T17:53:13.701582699Z" level=info msg="Ensure that sandbox 15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969 in task-service has been cleanup successfully" Sep 4 17:53:13.706548 kubelet[2620]: I0904 17:53:13.705760 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:13.708107 containerd[1456]: time="2024-09-04T17:53:13.707568638Z" level=info msg="StopPodSandbox for \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\"" Sep 4 17:53:13.710813 containerd[1456]: time="2024-09-04T17:53:13.710528613Z" level=info msg="Ensure that sandbox edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed in task-service has been cleanup successfully" Sep 4 17:53:13.711819 kubelet[2620]: I0904 17:53:13.711172 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:13.718532 containerd[1456]: time="2024-09-04T17:53:13.718302630Z" level=info msg="StopPodSandbox for \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\"" Sep 4 17:53:13.720782 containerd[1456]: time="2024-09-04T17:53:13.720729189Z" level=info msg="Ensure that sandbox c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43 in task-service has been cleanup successfully" Sep 4 17:53:13.723693 kubelet[2620]: I0904 17:53:13.723416 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:13.727191 containerd[1456]: time="2024-09-04T17:53:13.727088396Z" level=info msg="StopPodSandbox for \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\"" Sep 4 17:53:13.729580 containerd[1456]: time="2024-09-04T17:53:13.729545866Z" level=info msg="Ensure that sandbox 0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787 in task-service has been cleanup successfully" Sep 4 17:53:13.751474 containerd[1456]: time="2024-09-04T17:53:13.751428725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:53:13.810905 containerd[1456]: time="2024-09-04T17:53:13.810744639Z" level=error msg="StopPodSandbox for \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\" failed" error="failed to destroy network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:13.811486 kubelet[2620]: E0904 17:53:13.811355 2620 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:13.811780 kubelet[2620]: E0904 17:53:13.811438 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969"} Sep 4 17:53:13.811780 kubelet[2620]: E0904 17:53:13.811537 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:53:13.811780 kubelet[2620]: E0904 17:53:13.811577 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6v7cb" podUID="3f204459-67ca-4ef3-87db-d2dfa1c8a5a7" Sep 4 17:53:13.843418 containerd[1456]: time="2024-09-04T17:53:13.843343942Z" level=error msg="StopPodSandbox for \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\" failed" error="failed to destroy network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:13.843750 kubelet[2620]: E0904 17:53:13.843691 2620 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:13.843900 kubelet[2620]: E0904 17:53:13.843772 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43"} Sep 4 17:53:13.843900 kubelet[2620]: E0904 17:53:13.843824 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"090627b4-68b0-464b-9437-a497123fe057\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:53:13.843900 kubelet[2620]: E0904 17:53:13.843860 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"090627b4-68b0-464b-9437-a497123fe057\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kmp25" podUID="090627b4-68b0-464b-9437-a497123fe057" Sep 4 17:53:13.849922 containerd[1456]: time="2024-09-04T17:53:13.849567760Z" level=error msg="StopPodSandbox for \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\" failed" error="failed to destroy network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:13.850205 kubelet[2620]: E0904 17:53:13.850033 2620 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:13.850205 kubelet[2620]: E0904 17:53:13.850097 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed"} Sep 4 17:53:13.850205 kubelet[2620]: E0904 17:53:13.850167 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2e7d76b-6bfd-417d-a1c5-8517155d4273\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:53:13.850573 kubelet[2620]: E0904 17:53:13.850207 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2e7d76b-6bfd-417d-a1c5-8517155d4273\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f4c99c577-f2nlw" podUID="f2e7d76b-6bfd-417d-a1c5-8517155d4273" Sep 4 17:53:13.854208 containerd[1456]: time="2024-09-04T17:53:13.854129506Z" level=error msg="StopPodSandbox for \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\" failed" error="failed to destroy network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:53:13.854498 kubelet[2620]: E0904 17:53:13.854444 2620 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:13.854604 kubelet[2620]: E0904 17:53:13.854509 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787"} Sep 4 17:53:13.854604 kubelet[2620]: E0904 17:53:13.854557 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f10445bd-c567-4cbb-b259-041496b4f378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:53:13.854953 kubelet[2620]: E0904 17:53:13.854599 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f10445bd-c567-4cbb-b259-041496b4f378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s2r6n" podUID="f10445bd-c567-4cbb-b259-041496b4f378" Sep 4 17:53:14.198554 kubelet[2620]: I0904 17:53:14.198511 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:53:19.654895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174190475.mount: Deactivated successfully. 
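[Editor's note] The repeated KillPodSandbox failures above share one root cause: the Calico CNI delete path stats /var/lib/calico/nodename, a file that the calico/node container writes only once it is up, and at this point in the log calico-node is still pulling its image, so the kubelet keeps retrying the teardown. Below is a minimal, hypothetical Go sketch of that precondition check, written only to illustrate the error text; it is not Calico's actual implementation.

package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameFile is the file calico/node writes once it has started; the
// CNI plugin reads it to learn which Calico node it is running on.
const nodenameFile = "/var/lib/calico/nodename"

// requireCalicoNode mirrors the error seen in the log: if the nodename
// file is missing, network teardown cannot proceed yet.
func requireCalicoNode() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		if errors.Is(err, os.ErrNotExist) {
			return fmt.Errorf("stat %s: no such file or directory: "+
				"check that the calico/node container is running and has mounted /var/lib/calico/",
				nodenameFile)
		}
		return err
	}
	return nil
}

func main() {
	if err := requireCalicoNode(); err != nil {
		fmt.Println("cannot tear down sandbox network yet:", err)
		return
	}
	fmt.Println("calico/node is up; safe to tear down pod networks")
}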
Sep 4 17:53:19.685716 containerd[1456]: time="2024-09-04T17:53:19.685656768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:19.687573 containerd[1456]: time="2024-09-04T17:53:19.687515458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:53:19.689793 containerd[1456]: time="2024-09-04T17:53:19.688901885Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:19.693233 containerd[1456]: time="2024-09-04T17:53:19.693143914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:19.694187 containerd[1456]: time="2024-09-04T17:53:19.694124078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.942450079s" Sep 4 17:53:19.694286 containerd[1456]: time="2024-09-04T17:53:19.694195145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:53:19.718331 containerd[1456]: time="2024-09-04T17:53:19.718281682Z" level=info msg="CreateContainer within sandbox \"35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:53:19.742054 containerd[1456]: time="2024-09-04T17:53:19.742004866Z" level=info msg="CreateContainer within sandbox \"35147d2c37b6d2dad67decd188681b5bf6e232bbaaeff01406a9a2fed5ce4b51\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b6cf4d8a55953e52d3b77c4fd0f110cb895892ff6d7dae437d9998924309bcc2\"" Sep 4 17:53:19.743443 containerd[1456]: time="2024-09-04T17:53:19.743076770Z" level=info msg="StartContainer for \"b6cf4d8a55953e52d3b77c4fd0f110cb895892ff6d7dae437d9998924309bcc2\"" Sep 4 17:53:19.794372 systemd[1]: Started cri-containerd-b6cf4d8a55953e52d3b77c4fd0f110cb895892ff6d7dae437d9998924309bcc2.scope - libcontainer container b6cf4d8a55953e52d3b77c4fd0f110cb895892ff6d7dae437d9998924309bcc2. Sep 4 17:53:19.837269 containerd[1456]: time="2024-09-04T17:53:19.837100827Z" level=info msg="StartContainer for \"b6cf4d8a55953e52d3b77c4fd0f110cb895892ff6d7dae437d9998924309bcc2\" returns successfully" Sep 4 17:53:19.961787 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:53:19.961981 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
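[Editor's note] For scale, the PullImage record above reports 117873426 bytes for ghcr.io/flatcar/calico/node:v3.28.1 fetched in 5.942450079s, roughly 19.8 MB/s (about 18.9 MiB/s). A tiny Go sketch of that back-of-the-envelope calculation, with the figures copied from the log rather than queried from containerd:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the containerd "Pulled image" line above.
	const imageBytes = 117873426
	pullTime := 5942450079 * time.Nanosecond // 5.942450079s

	bytesPerSec := float64(imageBytes) / pullTime.Seconds()
	fmt.Printf("pull rate: %.1f MB/s (%.1f MiB/s)\n",
		bytesPerSec/1e6, bytesPerSec/(1<<20))
}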
Sep 4 17:53:20.811253 kubelet[2620]: I0904 17:53:20.807520 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4tp87" podStartSLOduration=1.812983317 podStartE2EDuration="19.807496132s" podCreationTimestamp="2024-09-04 17:53:01 +0000 UTC" firstStartedPulling="2024-09-04 17:53:01.700775166 +0000 UTC m=+22.310294560" lastFinishedPulling="2024-09-04 17:53:19.695287989 +0000 UTC m=+40.304807375" observedRunningTime="2024-09-04 17:53:20.801144756 +0000 UTC m=+41.410664156" watchObservedRunningTime="2024-09-04 17:53:20.807496132 +0000 UTC m=+41.417015793" Sep 4 17:53:21.840568 kernel: bpftool[3852]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:53:22.160438 systemd-networkd[1375]: vxlan.calico: Link UP Sep 4 17:53:22.160452 systemd-networkd[1375]: vxlan.calico: Gained carrier Sep 4 17:53:23.516585 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Sep 4 17:53:25.538564 containerd[1456]: time="2024-09-04T17:53:25.538073796Z" level=info msg="StopPodSandbox for \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\"" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.596 [INFO][3942] k8s.go 608: Cleaning up netns ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.597 [INFO][3942] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" iface="eth0" netns="/var/run/netns/cni-e3d5eff7-b42d-f578-3f3e-8c3515376502" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.597 [INFO][3942] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" iface="eth0" netns="/var/run/netns/cni-e3d5eff7-b42d-f578-3f3e-8c3515376502" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.597 [INFO][3942] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" iface="eth0" netns="/var/run/netns/cni-e3d5eff7-b42d-f578-3f3e-8c3515376502" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.597 [INFO][3942] k8s.go 615: Releasing IP address(es) ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.597 [INFO][3942] utils.go 188: Calico CNI releasing IP address ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.624 [INFO][3949] ipam_plugin.go 417: Releasing address using handleID ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.625 [INFO][3949] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.625 [INFO][3949] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.633 [WARNING][3949] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.633 [INFO][3949] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.635 [INFO][3949] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:25.640876 containerd[1456]: 2024-09-04 17:53:25.637 [INFO][3942] k8s.go 621: Teardown processing complete. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:25.641926 containerd[1456]: time="2024-09-04T17:53:25.641789010Z" level=info msg="TearDown network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\" successfully" Sep 4 17:53:25.642276 containerd[1456]: time="2024-09-04T17:53:25.642002884Z" level=info msg="StopPodSandbox for \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\" returns successfully" Sep 4 17:53:25.645916 systemd[1]: run-netns-cni\x2de3d5eff7\x2db42d\x2df578\x2d3f3e\x2d8c3515376502.mount: Deactivated successfully. Sep 4 17:53:25.646333 containerd[1456]: time="2024-09-04T17:53:25.645975037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2r6n,Uid:f10445bd-c567-4cbb-b259-041496b4f378,Namespace:kube-system,Attempt:1,}" Sep 4 17:53:25.789398 systemd-networkd[1375]: cali84a8bd4d8d1: Link UP Sep 4 17:53:25.793067 systemd-networkd[1375]: cali84a8bd4d8d1: Gained carrier Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.707 [INFO][3956] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0 coredns-7db6d8ff4d- kube-system f10445bd-c567-4cbb-b259-041496b4f378 718 0 2024-09-04 17:52:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal coredns-7db6d8ff4d-s2r6n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84a8bd4d8d1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.707 [INFO][3956] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.743 [INFO][3966] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" HandleID="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.753 [INFO][3966] ipam_plugin.go 270: Auto assigning IP ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" HandleID="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265de0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-s2r6n", "timestamp":"2024-09-04 17:53:25.743202377 +0000 UTC"}, Hostname:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.753 [INFO][3966] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.753 [INFO][3966] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.753 [INFO][3966] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal' Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.755 [INFO][3966] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.759 [INFO][3966] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.763 [INFO][3966] ipam.go 489: Trying affinity for 192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.765 [INFO][3966] ipam.go 155: Attempting to load block cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.767 [INFO][3966] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.767 [INFO][3966] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.769 [INFO][3966] ipam.go 1685: Creating new handle: k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9 Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.777 [INFO][3966] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.128/26 
handle="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.782 [INFO][3966] ipam.go 1216: Successfully claimed IPs: [192.168.118.129/26] block=192.168.118.128/26 handle="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.782 [INFO][3966] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.129/26] handle="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.783 [INFO][3966] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:25.813727 containerd[1456]: 2024-09-04 17:53:25.783 [INFO][3966] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.129/26] IPv6=[] ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" HandleID="k8s-pod-network.32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.814879 containerd[1456]: 2024-09-04 17:53:25.785 [INFO][3956] k8s.go 386: Populated endpoint ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f10445bd-c567-4cbb-b259-041496b4f378", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-s2r6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8bd4d8d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 
17:53:25.814879 containerd[1456]: 2024-09-04 17:53:25.785 [INFO][3956] k8s.go 387: Calico CNI using IPs: [192.168.118.129/32] ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.814879 containerd[1456]: 2024-09-04 17:53:25.785 [INFO][3956] dataplane_linux.go 68: Setting the host side veth name to cali84a8bd4d8d1 ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.814879 containerd[1456]: 2024-09-04 17:53:25.790 [INFO][3956] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.814879 containerd[1456]: 2024-09-04 17:53:25.790 [INFO][3956] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f10445bd-c567-4cbb-b259-041496b4f378", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9", Pod:"coredns-7db6d8ff4d-s2r6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8bd4d8d1", MAC:"42:06:1c:89:7a:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:25.814879 containerd[1456]: 2024-09-04 17:53:25.805 
[INFO][3956] k8s.go 500: Wrote updated endpoint to datastore ContainerID="32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s2r6n" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:25.866701 containerd[1456]: time="2024-09-04T17:53:25.866524585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:53:25.866701 containerd[1456]: time="2024-09-04T17:53:25.866590433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:53:25.866701 containerd[1456]: time="2024-09-04T17:53:25.866633014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:25.867086 containerd[1456]: time="2024-09-04T17:53:25.866792044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:25.904391 systemd[1]: Started cri-containerd-32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9.scope - libcontainer container 32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9. Sep 4 17:53:25.967601 containerd[1456]: time="2024-09-04T17:53:25.967551589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2r6n,Uid:f10445bd-c567-4cbb-b259-041496b4f378,Namespace:kube-system,Attempt:1,} returns sandbox id \"32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9\"" Sep 4 17:53:25.971889 containerd[1456]: time="2024-09-04T17:53:25.971752100Z" level=info msg="CreateContainer within sandbox \"32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:53:25.992076 containerd[1456]: time="2024-09-04T17:53:25.991955262Z" level=info msg="CreateContainer within sandbox \"32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e3ca91c3bcce180f7b7b8cba895f131441fd5f0bb3ea30f5a13a841aec27753\"" Sep 4 17:53:25.994393 containerd[1456]: time="2024-09-04T17:53:25.992688818Z" level=info msg="StartContainer for \"9e3ca91c3bcce180f7b7b8cba895f131441fd5f0bb3ea30f5a13a841aec27753\"" Sep 4 17:53:26.027371 systemd[1]: Started cri-containerd-9e3ca91c3bcce180f7b7b8cba895f131441fd5f0bb3ea30f5a13a841aec27753.scope - libcontainer container 9e3ca91c3bcce180f7b7b8cba895f131441fd5f0bb3ea30f5a13a841aec27753. 
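[Editor's note] The IPAM walk above follows the usual Calico pattern: confirm this host's affinity for block 192.168.118.128/26, then claim the next free address from it, which for coredns-7db6d8ff4d-s2r6n is 192.168.118.129. The .128 address is evidently already taken, plausibly by the node's own vxlan.calico tunnel address, though that assignment is not shown in this excerpt. A simplified, hypothetical sketch of picking the first free address in such a /26 block, not Calico's ipam.go:

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks the addresses of the block in order and returns the
// first one that is not already allocated.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.118.128/26")
	// Assume .128 is already in use on the node, so the first workload
	// gets .129, matching the log.
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.118.128"): true,
	}
	if ip, ok := firstFree(block, allocated); ok {
		fmt.Println("assigned", ip) // assigned 192.168.118.129
	}
}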
Sep 4 17:53:26.064852 containerd[1456]: time="2024-09-04T17:53:26.064722915Z" level=info msg="StartContainer for \"9e3ca91c3bcce180f7b7b8cba895f131441fd5f0bb3ea30f5a13a841aec27753\" returns successfully" Sep 4 17:53:26.811319 kubelet[2620]: I0904 17:53:26.809925 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s2r6n" podStartSLOduration=32.809903926 podStartE2EDuration="32.809903926s" podCreationTimestamp="2024-09-04 17:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:53:26.809768568 +0000 UTC m=+47.419287975" watchObservedRunningTime="2024-09-04 17:53:26.809903926 +0000 UTC m=+47.419423333" Sep 4 17:53:27.484508 systemd-networkd[1375]: cali84a8bd4d8d1: Gained IPv6LL Sep 4 17:53:27.539211 containerd[1456]: time="2024-09-04T17:53:27.538637254Z" level=info msg="StopPodSandbox for \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\"" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.624 [INFO][4092] k8s.go 608: Cleaning up netns ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.625 [INFO][4092] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" iface="eth0" netns="/var/run/netns/cni-d934b8cb-9e8b-707c-0dfa-d12823c28023" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.627 [INFO][4092] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" iface="eth0" netns="/var/run/netns/cni-d934b8cb-9e8b-707c-0dfa-d12823c28023" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.628 [INFO][4092] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" iface="eth0" netns="/var/run/netns/cni-d934b8cb-9e8b-707c-0dfa-d12823c28023" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.628 [INFO][4092] k8s.go 615: Releasing IP address(es) ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.628 [INFO][4092] utils.go 188: Calico CNI releasing IP address ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.657 [INFO][4098] ipam_plugin.go 417: Releasing address using handleID ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.657 [INFO][4098] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.657 [INFO][4098] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.665 [WARNING][4098] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.666 [INFO][4098] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.667 [INFO][4098] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:27.670623 containerd[1456]: 2024-09-04 17:53:27.669 [INFO][4092] k8s.go 621: Teardown processing complete. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:27.674214 containerd[1456]: time="2024-09-04T17:53:27.673352653Z" level=info msg="TearDown network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\" successfully" Sep 4 17:53:27.674214 containerd[1456]: time="2024-09-04T17:53:27.673408091Z" level=info msg="StopPodSandbox for \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\" returns successfully" Sep 4 17:53:27.677011 containerd[1456]: time="2024-09-04T17:53:27.676413943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6v7cb,Uid:3f204459-67ca-4ef3-87db-d2dfa1c8a5a7,Namespace:calico-system,Attempt:1,}" Sep 4 17:53:27.677142 systemd[1]: run-netns-cni\x2dd934b8cb\x2d9e8b\x2d707c\x2d0dfa\x2dd12823c28023.mount: Deactivated successfully. 
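[Editor's note] The two "Observed pod startup duration" records above are worth decoding. For calico-node-4tp87 the end-to-end startup is 19.807s, but the SLO duration is only about 1.813s because the roughly 17.99s spent pulling the node image (lastFinishedPulling minus firstStartedPulling) is excluded; for coredns-7db6d8ff4d-s2r6n both values are 32.81s because no pull happened (the pull timestamps are zero). The Go snippet below reproduces that arithmetic from the timestamps in the log; it is a worked check of the numbers, not kubelet's latency-tracker code (kubelet uses monotonic readings, so the last few nanoseconds differ).

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the calico-node-4tp87 record in the log.
	created := parse("2024-09-04 17:53:01 +0000 UTC")
	firstPull := parse("2024-09-04 17:53:01.700775166 +0000 UTC")
	lastPull := parse("2024-09-04 17:53:19.695287989 +0000 UTC")
	running := parse("2024-09-04 17:53:20.807496132 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // image pull time excluded

	fmt.Println("E2E:", e2e) // 19.807496132s
	fmt.Println("SLO:", slo) // ~1.813s (log: 1.812983317)
}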
Sep 4 17:53:27.947725 systemd-networkd[1375]: cali0794a5fbbf8: Link UP Sep 4 17:53:27.951650 systemd-networkd[1375]: cali0794a5fbbf8: Gained carrier Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.795 [INFO][4105] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0 csi-node-driver- calico-system 3f204459-67ca-4ef3-87db-d2dfa1c8a5a7 742 0 2024-09-04 17:53:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal csi-node-driver-6v7cb eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0794a5fbbf8 [] []}} ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.795 [INFO][4105] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.864 [INFO][4118] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" HandleID="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.874 [INFO][4118] ipam_plugin.go 270: Auto assigning IP ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" HandleID="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", "pod":"csi-node-driver-6v7cb", "timestamp":"2024-09-04 17:53:27.864081136 +0000 UTC"}, Hostname:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.874 [INFO][4118] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.874 [INFO][4118] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.874 [INFO][4118] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal' Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.876 [INFO][4118] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.889 [INFO][4118] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.903 [INFO][4118] ipam.go 489: Trying affinity for 192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.906 [INFO][4118] ipam.go 155: Attempting to load block cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.912 [INFO][4118] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.912 [INFO][4118] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.914 [INFO][4118] ipam.go 1685: Creating new handle: k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917 Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.923 [INFO][4118] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.939 [INFO][4118] ipam.go 1216: Successfully claimed IPs: [192.168.118.130/26] block=192.168.118.128/26 handle="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.939 [INFO][4118] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.130/26] handle="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.939 [INFO][4118] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:53:27.986288 containerd[1456]: 2024-09-04 17:53:27.939 [INFO][4118] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.130/26] IPv6=[] ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" HandleID="k8s-pod-network.462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.987580 containerd[1456]: 2024-09-04 17:53:27.943 [INFO][4105] k8s.go 386: Populated endpoint ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-6v7cb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0794a5fbbf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:27.987580 containerd[1456]: 2024-09-04 17:53:27.943 [INFO][4105] k8s.go 387: Calico CNI using IPs: [192.168.118.130/32] ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.987580 containerd[1456]: 2024-09-04 17:53:27.943 [INFO][4105] dataplane_linux.go 68: Setting the host side veth name to cali0794a5fbbf8 ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.987580 containerd[1456]: 2024-09-04 17:53:27.948 [INFO][4105] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:27.987580 containerd[1456]: 2024-09-04 
17:53:27.948 [INFO][4105] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917", Pod:"csi-node-driver-6v7cb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0794a5fbbf8", MAC:"86:e4:50:8f:93:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:27.987580 containerd[1456]: 2024-09-04 17:53:27.975 [INFO][4105] k8s.go 500: Wrote updated endpoint to datastore ContainerID="462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917" Namespace="calico-system" Pod="csi-node-driver-6v7cb" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:28.002652 systemd[1]: Started sshd@11-10.128.0.52:22-147.75.109.163:51802.service - OpenSSH per-connection server daemon (147.75.109.163:51802). Sep 4 17:53:28.055268 containerd[1456]: time="2024-09-04T17:53:28.055009841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:53:28.057469 containerd[1456]: time="2024-09-04T17:53:28.057387535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:53:28.057608 containerd[1456]: time="2024-09-04T17:53:28.057491757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:28.057833 containerd[1456]: time="2024-09-04T17:53:28.057781646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:28.104440 systemd[1]: Started cri-containerd-462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917.scope - libcontainer container 462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917. 
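[Editor's note] Each WorkloadEndpoint above ends up with a MAC such as 42:06:1c:89:7a:12 or 86:e4:50:8f:93:21; both have the locally-administered bit set and the multicast bit clear, the usual convention for randomly generated container veth addresses, and the "Gained IPv6LL" lines from systemd-networkd are the interfaces picking up IPv6 link-local addresses. The sketch below generates such a MAC and derives the EUI-64 style link-local it would map to; this is illustrative only (the kernel may instead use stable-privacy link-local addresses), not how Calico itself assigns MACs.

package main

import (
	"crypto/rand"
	"fmt"
	"net/netip"
)

// randomUnicastLAA returns a random MAC with the locally-administered
// bit (0x02) set and the multicast bit (0x01) cleared in the first octet.
func randomUnicastLAA() [6]byte {
	var mac [6]byte
	if _, err := rand.Read(mac[:]); err != nil {
		panic(err)
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01
	return mac
}

// eui64LinkLocal derives fe80::/64 plus EUI-64 from a MAC: flip the
// universal/local bit and insert ff:fe in the middle.
func eui64LinkLocal(mac [6]byte) netip.Addr {
	var a [16]byte
	a[0], a[1] = 0xfe, 0x80
	a[8] = mac[0] ^ 0x02
	a[9], a[10], a[11] = mac[1], mac[2], 0xff
	a[12], a[13], a[14], a[15] = 0xfe, mac[3], mac[4], mac[5]
	return netip.AddrFrom16(a)
}

func main() {
	mac := randomUnicastLAA()
	fmt.Printf("MAC: %02x:%02x:%02x:%02x:%02x:%02x\n",
		mac[0], mac[1], mac[2], mac[3], mac[4], mac[5])
	fmt.Println("EUI-64 link-local:", eui64LinkLocal(mac))
	// For 86:e4:50:8f:93:21 this yields fe80::84e4:50ff:fe8f:9321.
}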
Sep 4 17:53:28.164859 containerd[1456]: time="2024-09-04T17:53:28.164568353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6v7cb,Uid:3f204459-67ca-4ef3-87db-d2dfa1c8a5a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917\"" Sep 4 17:53:28.167774 containerd[1456]: time="2024-09-04T17:53:28.167449073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:53:28.313132 sshd[4129]: Accepted publickey for core from 147.75.109.163 port 51802 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:28.315427 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:28.322111 systemd-logind[1441]: New session 10 of user core. Sep 4 17:53:28.327369 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:53:28.540447 containerd[1456]: time="2024-09-04T17:53:28.540050642Z" level=info msg="StopPodSandbox for \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\"" Sep 4 17:53:28.665533 sshd[4129]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:28.676214 systemd[1]: run-containerd-runc-k8s.io-462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917-runc.2G0lVG.mount: Deactivated successfully. Sep 4 17:53:28.679041 systemd[1]: sshd@11-10.128.0.52:22-147.75.109.163:51802.service: Deactivated successfully. Sep 4 17:53:28.684580 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:53:28.687948 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:53:28.692410 systemd-logind[1441]: Removed session 10. Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.642 [INFO][4203] k8s.go 608: Cleaning up netns ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.643 [INFO][4203] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" iface="eth0" netns="/var/run/netns/cni-c5cd2dd7-c504-b724-f1eb-4242b84ae55c" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.643 [INFO][4203] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" iface="eth0" netns="/var/run/netns/cni-c5cd2dd7-c504-b724-f1eb-4242b84ae55c" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.643 [INFO][4203] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" iface="eth0" netns="/var/run/netns/cni-c5cd2dd7-c504-b724-f1eb-4242b84ae55c" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.643 [INFO][4203] k8s.go 615: Releasing IP address(es) ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.643 [INFO][4203] utils.go 188: Calico CNI releasing IP address ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.689 [INFO][4210] ipam_plugin.go 417: Releasing address using handleID ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.689 [INFO][4210] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.689 [INFO][4210] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.698 [WARNING][4210] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.698 [INFO][4210] ipam_plugin.go 445: Releasing address using workloadID ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.700 [INFO][4210] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:28.703523 containerd[1456]: 2024-09-04 17:53:28.702 [INFO][4203] k8s.go 621: Teardown processing complete. ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:28.705902 containerd[1456]: time="2024-09-04T17:53:28.705640557Z" level=info msg="TearDown network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\" successfully" Sep 4 17:53:28.705902 containerd[1456]: time="2024-09-04T17:53:28.705710385Z" level=info msg="StopPodSandbox for \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\" returns successfully" Sep 4 17:53:28.707943 containerd[1456]: time="2024-09-04T17:53:28.707895915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4c99c577-f2nlw,Uid:f2e7d76b-6bfd-417d-a1c5-8517155d4273,Namespace:calico-system,Attempt:1,}" Sep 4 17:53:28.709966 systemd[1]: run-netns-cni\x2dc5cd2dd7\x2dc504\x2db724\x2df1eb\x2d4242b84ae55c.mount: Deactivated successfully. 
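[Editor's note] The mount unit names in the cleanup lines look odd because systemd escapes unit names: "/" becomes "-" and a literal "-" in the path is hex-escaped, so the netns path /run/netns/cni-c5cd2dd7-... appears as run-netns-cni\x2dc5cd2dd7\x2d....mount. A small Go helper that undoes the \xNN escaping, equivalent in spirit to systemd-escape --unescape and shown here only to make the log easier to read:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's \xNN escaping and turns the remaining
// "-" separators back into "/", yielding the path the mount unit covers.
func unescapeUnit(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
			continue
		}
		b.WriteByte(name[i])
	}
	return "/" + b.String()
}

func main() {
	unit := `run-netns-cni\x2dc5cd2dd7\x2dc504\x2db724\x2df1eb\x2d4242b84ae55c.mount`
	fmt.Println(unescapeUnit(unit))
	// /run/netns/cni-c5cd2dd7-c504-b724-f1eb-4242b84ae55c
}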
Sep 4 17:53:28.896187 systemd-networkd[1375]: calic5022c3891f: Link UP Sep 4 17:53:28.897358 systemd-networkd[1375]: calic5022c3891f: Gained carrier Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.793 [INFO][4220] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0 calico-kube-controllers-5f4c99c577- calico-system f2e7d76b-6bfd-417d-a1c5-8517155d4273 781 0 2024-09-04 17:53:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f4c99c577 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal calico-kube-controllers-5f4c99c577-f2nlw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic5022c3891f [] []}} ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.793 [INFO][4220] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.845 [INFO][4230] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" HandleID="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.858 [INFO][4230] ipam_plugin.go 270: Auto assigning IP ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" HandleID="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318400), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5f4c99c577-f2nlw", "timestamp":"2024-09-04 17:53:28.845612764 +0000 UTC"}, Hostname:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.858 [INFO][4230] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.858 [INFO][4230] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.858 [INFO][4230] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal' Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.860 [INFO][4230] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.865 [INFO][4230] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.870 [INFO][4230] ipam.go 489: Trying affinity for 192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.872 [INFO][4230] ipam.go 155: Attempting to load block cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.875 [INFO][4230] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.875 [INFO][4230] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.877 [INFO][4230] ipam.go 1685: Creating new handle: k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1 Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.881 [INFO][4230] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.890 [INFO][4230] ipam.go 1216: Successfully claimed IPs: [192.168.118.131/26] block=192.168.118.128/26 handle="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.890 [INFO][4230] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.131/26] handle="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.890 [INFO][4230] ipam_plugin.go 379: Released host-wide IPAM lock. 
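The run from "Auto-assign 1 ipv4" to "Released host-wide IPAM lock" above is Calico's block-affinity IPAM path in miniature: this host already owns an affinity for 192.168.118.128/26, the block is loaded, and a single address (192.168.118.131) is claimed under a handle named after the new sandbox. The bookkeeping reduces to "first free address in the affine block"; the sketch below is illustrative only, not Calico code, and which lower addresses are already in use is assumed purely so the example lands on .131 as in the log.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a block and returns the first address not yet handed out.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.118.128/26")
	used := map[netip.Addr]bool{ // assumed: addresses taken by earlier workloads on this node
		netip.MustParseAddr("192.168.118.128"): true,
		netip.MustParseAddr("192.168.118.129"): true,
		netip.MustParseAddr("192.168.118.130"): true,
	}
	if ip, ok := nextFree(block, used); ok {
		fmt.Println("would claim", ip) // 192.168.118.131
	}
}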
Sep 4 17:53:28.924867 containerd[1456]: 2024-09-04 17:53:28.890 [INFO][4230] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.131/26] IPv6=[] ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" HandleID="k8s-pod-network.fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.926085 containerd[1456]: 2024-09-04 17:53:28.892 [INFO][4220] k8s.go 386: Populated endpoint ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0", GenerateName:"calico-kube-controllers-5f4c99c577-", Namespace:"calico-system", SelfLink:"", UID:"f2e7d76b-6bfd-417d-a1c5-8517155d4273", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4c99c577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5f4c99c577-f2nlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5022c3891f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:28.926085 containerd[1456]: 2024-09-04 17:53:28.892 [INFO][4220] k8s.go 387: Calico CNI using IPs: [192.168.118.131/32] ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.926085 containerd[1456]: 2024-09-04 17:53:28.892 [INFO][4220] dataplane_linux.go 68: Setting the host side veth name to calic5022c3891f ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.926085 containerd[1456]: 2024-09-04 17:53:28.897 [INFO][4220] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" 
Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.926085 containerd[1456]: 2024-09-04 17:53:28.898 [INFO][4220] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0", GenerateName:"calico-kube-controllers-5f4c99c577-", Namespace:"calico-system", SelfLink:"", UID:"f2e7d76b-6bfd-417d-a1c5-8517155d4273", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4c99c577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1", Pod:"calico-kube-controllers-5f4c99c577-f2nlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5022c3891f", MAC:"72:24:e7:4b:ce:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:28.926085 containerd[1456]: 2024-09-04 17:53:28.916 [INFO][4220] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1" Namespace="calico-system" Pod="calico-kube-controllers-5f4c99c577-f2nlw" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:28.982514 containerd[1456]: time="2024-09-04T17:53:28.969742646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:53:28.982514 containerd[1456]: time="2024-09-04T17:53:28.969826689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:53:28.982514 containerd[1456]: time="2024-09-04T17:53:28.969855058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:28.982514 containerd[1456]: time="2024-09-04T17:53:28.970022611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:29.034426 systemd[1]: Started cri-containerd-fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1.scope - libcontainer container fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1. Sep 4 17:53:29.112550 containerd[1456]: time="2024-09-04T17:53:29.112494311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4c99c577-f2nlw,Uid:f2e7d76b-6bfd-417d-a1c5-8517155d4273,Namespace:calico-system,Attempt:1,} returns sandbox id \"fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1\"" Sep 4 17:53:29.265980 containerd[1456]: time="2024-09-04T17:53:29.265835890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:29.267624 containerd[1456]: time="2024-09-04T17:53:29.267560719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:53:29.269052 containerd[1456]: time="2024-09-04T17:53:29.268975738Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:29.271858 containerd[1456]: time="2024-09-04T17:53:29.271778356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:29.273206 containerd[1456]: time="2024-09-04T17:53:29.272743000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.105239357s" Sep 4 17:53:29.273206 containerd[1456]: time="2024-09-04T17:53:29.272788151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:53:29.274516 containerd[1456]: time="2024-09-04T17:53:29.274139952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:53:29.276267 containerd[1456]: time="2024-09-04T17:53:29.275877455Z" level=info msg="CreateContainer within sandbox \"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:53:29.296675 containerd[1456]: time="2024-09-04T17:53:29.296601522Z" level=info msg="CreateContainer within sandbox \"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9818058d982756fbf5c2e84f6bb8022f7de22ce95a74cb85bfcf610365742b81\"" Sep 4 17:53:29.297531 containerd[1456]: time="2024-09-04T17:53:29.297333575Z" level=info msg="StartContainer for \"9818058d982756fbf5c2e84f6bb8022f7de22ce95a74cb85bfcf610365742b81\"" Sep 4 17:53:29.336604 systemd[1]: Started cri-containerd-9818058d982756fbf5c2e84f6bb8022f7de22ce95a74cb85bfcf610365742b81.scope - libcontainer container 9818058d982756fbf5c2e84f6bb8022f7de22ce95a74cb85bfcf610365742b81. 
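The csi image pull above reports its own timing ("in 1.105239357s"), and the later kube-controllers and node-driver-registrar pulls log the same pattern. The figure is a plain Go time.Duration string, so it can be parsed as-is; a small sketch against an abridged copy of the message (the regex and helper are mine, not containerd's):

package main

import (
	"fmt"
	"regexp"
	"time"
)

func main() {
	// Abridged from the log line above.
	msg := `Pulled image "ghcr.io/flatcar/calico/csi:v3.28.1" in 1.105239357s`
	re := regexp.MustCompile(`in ([0-9.]+s)$`)
	if m := re.FindStringSubmatch(msg); m != nil {
		if d, err := time.ParseDuration(m[1]); err == nil {
			fmt.Println("csi image pulled in", d)
		}
	}
}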
Sep 4 17:53:29.380517 containerd[1456]: time="2024-09-04T17:53:29.380444962Z" level=info msg="StartContainer for \"9818058d982756fbf5c2e84f6bb8022f7de22ce95a74cb85bfcf610365742b81\" returns successfully" Sep 4 17:53:29.532632 systemd-networkd[1375]: cali0794a5fbbf8: Gained IPv6LL Sep 4 17:53:29.542090 containerd[1456]: time="2024-09-04T17:53:29.540070884Z" level=info msg="StopPodSandbox for \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\"" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.594 [INFO][4342] k8s.go 608: Cleaning up netns ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.595 [INFO][4342] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" iface="eth0" netns="/var/run/netns/cni-2cc6e50f-de50-586b-d430-241e4f7bc3ea" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.595 [INFO][4342] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" iface="eth0" netns="/var/run/netns/cni-2cc6e50f-de50-586b-d430-241e4f7bc3ea" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.596 [INFO][4342] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" iface="eth0" netns="/var/run/netns/cni-2cc6e50f-de50-586b-d430-241e4f7bc3ea" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.597 [INFO][4342] k8s.go 615: Releasing IP address(es) ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.597 [INFO][4342] utils.go 188: Calico CNI releasing IP address ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.628 [INFO][4348] ipam_plugin.go 417: Releasing address using handleID ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.628 [INFO][4348] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.628 [INFO][4348] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.635 [WARNING][4348] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.635 [INFO][4348] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.636 [INFO][4348] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:29.639491 containerd[1456]: 2024-09-04 17:53:29.638 [INFO][4342] k8s.go 621: Teardown processing complete. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:29.640359 containerd[1456]: time="2024-09-04T17:53:29.639696958Z" level=info msg="TearDown network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\" successfully" Sep 4 17:53:29.640359 containerd[1456]: time="2024-09-04T17:53:29.639737326Z" level=info msg="StopPodSandbox for \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\" returns successfully" Sep 4 17:53:29.641216 containerd[1456]: time="2024-09-04T17:53:29.640915375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kmp25,Uid:090627b4-68b0-464b-9437-a497123fe057,Namespace:kube-system,Attempt:1,}" Sep 4 17:53:29.682361 systemd[1]: run-netns-cni\x2d2cc6e50f\x2dde50\x2d586b\x2dd430\x2d241e4f7bc3ea.mount: Deactivated successfully. 
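The "run-netns-cni\x2d....mount: Deactivated successfully" lines are systemd mount units for the CNI network-namespace bind mounts: systemd escapes "-" as "\x2d" and maps "/" to "-" in unit names, and /var/run is a symlink to /run on this image, so each unit name round-trips to the /var/run/netns/cni-... path seen in the matching teardown. The helper below is my own and only approximates what "systemd-escape --unescape --path" does:

package main

import (
	"fmt"
	"strings"
)

// unitToPath turns a systemd mount unit name back into the mounted path.
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, `\x2d`, "\x00") // protect escaped hyphens
	name = strings.ReplaceAll(name, "-", "/")       // unit "-" separators are path slashes
	name = strings.ReplaceAll(name, "\x00", "-")    // restore literal hyphens
	return "/" + name
}

func main() {
	fmt.Println(unitToPath(`run-netns-cni\x2d2cc6e50f\x2dde50\x2d586b\x2dd430\x2d241e4f7bc3ea.mount`))
	// /run/netns/cni-2cc6e50f-de50-586b-d430-241e4f7bc3ea
}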
Sep 4 17:53:29.805140 systemd-networkd[1375]: calia2d9b3250b5: Link UP Sep 4 17:53:29.808693 systemd-networkd[1375]: calia2d9b3250b5: Gained carrier Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.707 [INFO][4355] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0 coredns-7db6d8ff4d- kube-system 090627b4-68b0-464b-9437-a497123fe057 796 0 2024-09-04 17:52:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal coredns-7db6d8ff4d-kmp25 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2d9b3250b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.707 [INFO][4355] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.747 [INFO][4365] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" HandleID="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.757 [INFO][4365] ipam_plugin.go 270: Auto assigning IP ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" HandleID="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-kmp25", "timestamp":"2024-09-04 17:53:29.747592108 +0000 UTC"}, Hostname:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.757 [INFO][4365] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.757 [INFO][4365] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.758 [INFO][4365] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal' Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.760 [INFO][4365] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.766 [INFO][4365] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.771 [INFO][4365] ipam.go 489: Trying affinity for 192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.774 [INFO][4365] ipam.go 155: Attempting to load block cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.777 [INFO][4365] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.777 [INFO][4365] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.779 [INFO][4365] ipam.go 1685: Creating new handle: k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581 Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.785 [INFO][4365] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.791 [INFO][4365] ipam.go 1216: Successfully claimed IPs: [192.168.118.132/26] block=192.168.118.128/26 handle="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.791 [INFO][4365] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.132/26] handle="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.791 [INFO][4365] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:53:29.834588 containerd[1456]: 2024-09-04 17:53:29.791 [INFO][4365] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.132/26] IPv6=[] ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" HandleID="k8s-pod-network.82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.837866 containerd[1456]: 2024-09-04 17:53:29.795 [INFO][4355] k8s.go 386: Populated endpoint ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"090627b4-68b0-464b-9437-a497123fe057", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-kmp25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2d9b3250b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:29.837866 containerd[1456]: 2024-09-04 17:53:29.795 [INFO][4355] k8s.go 387: Calico CNI using IPs: [192.168.118.132/32] ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.837866 containerd[1456]: 2024-09-04 17:53:29.795 [INFO][4355] dataplane_linux.go 68: Setting the host side veth name to calia2d9b3250b5 ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.837866 containerd[1456]: 2024-09-04 17:53:29.811 [INFO][4355] dataplane_linux.go 479: Disabling IPv4 
forwarding ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.837866 containerd[1456]: 2024-09-04 17:53:29.813 [INFO][4355] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"090627b4-68b0-464b-9437-a497123fe057", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581", Pod:"coredns-7db6d8ff4d-kmp25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2d9b3250b5", MAC:"3a:8a:db:54:41:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:29.837866 containerd[1456]: 2024-09-04 17:53:29.831 [INFO][4355] k8s.go 500: Wrote updated endpoint to datastore ContainerID="82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kmp25" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:29.878446 containerd[1456]: time="2024-09-04T17:53:29.877620556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:53:29.879527 containerd[1456]: time="2024-09-04T17:53:29.879239222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:53:29.879527 containerd[1456]: time="2024-09-04T17:53:29.879297026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:29.879903 containerd[1456]: time="2024-09-04T17:53:29.879725238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:53:29.918442 systemd-networkd[1375]: calic5022c3891f: Gained IPv6LL Sep 4 17:53:29.931420 systemd[1]: Started cri-containerd-82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581.scope - libcontainer container 82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581. Sep 4 17:53:29.994293 containerd[1456]: time="2024-09-04T17:53:29.994202086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kmp25,Uid:090627b4-68b0-464b-9437-a497123fe057,Namespace:kube-system,Attempt:1,} returns sandbox id \"82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581\"" Sep 4 17:53:30.000397 containerd[1456]: time="2024-09-04T17:53:29.999897058Z" level=info msg="CreateContainer within sandbox \"82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:53:30.020369 containerd[1456]: time="2024-09-04T17:53:30.019031813Z" level=info msg="CreateContainer within sandbox \"82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed7f63f0c0735ace7dc338eb0b60966cd7e5a79c7a99f6aad3e3b511bddddcbc\"" Sep 4 17:53:30.020369 containerd[1456]: time="2024-09-04T17:53:30.019845728Z" level=info msg="StartContainer for \"ed7f63f0c0735ace7dc338eb0b60966cd7e5a79c7a99f6aad3e3b511bddddcbc\"" Sep 4 17:53:30.061509 systemd[1]: Started cri-containerd-ed7f63f0c0735ace7dc338eb0b60966cd7e5a79c7a99f6aad3e3b511bddddcbc.scope - libcontainer container ed7f63f0c0735ace7dc338eb0b60966cd7e5a79c7a99f6aad3e3b511bddddcbc. 
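In the coredns endpoint dumps above the Port fields are printed in hexadecimal: 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the coredns metrics port), matching the port names in the same Ports list. Trivial check:

package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153
}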
Sep 4 17:53:30.114909 containerd[1456]: time="2024-09-04T17:53:30.113908829Z" level=info msg="StartContainer for \"ed7f63f0c0735ace7dc338eb0b60966cd7e5a79c7a99f6aad3e3b511bddddcbc\" returns successfully" Sep 4 17:53:30.873076 kubelet[2620]: I0904 17:53:30.872992 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kmp25" podStartSLOduration=36.872965425 podStartE2EDuration="36.872965425s" podCreationTimestamp="2024-09-04 17:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:53:30.846537682 +0000 UTC m=+51.456057089" watchObservedRunningTime="2024-09-04 17:53:30.872965425 +0000 UTC m=+51.482484820" Sep 4 17:53:31.133615 systemd-networkd[1375]: calia2d9b3250b5: Gained IPv6LL Sep 4 17:53:31.361006 containerd[1456]: time="2024-09-04T17:53:31.360924410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:31.362356 containerd[1456]: time="2024-09-04T17:53:31.362286309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:53:31.363646 containerd[1456]: time="2024-09-04T17:53:31.363560131Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:31.366705 containerd[1456]: time="2024-09-04T17:53:31.366626999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:31.368368 containerd[1456]: time="2024-09-04T17:53:31.367757939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.0932703s" Sep 4 17:53:31.368368 containerd[1456]: time="2024-09-04T17:53:31.367818103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:53:31.370847 containerd[1456]: time="2024-09-04T17:53:31.370461858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:53:31.394548 containerd[1456]: time="2024-09-04T17:53:31.394224385Z" level=info msg="CreateContainer within sandbox \"fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:53:31.421552 containerd[1456]: time="2024-09-04T17:53:31.421247723Z" level=info msg="CreateContainer within sandbox \"fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ecffbc0804d8f20e326f038a242202ab83bafc0248e56ae91df210d90f87319\"" Sep 4 17:53:31.421756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237066737.mount: Deactivated successfully. 
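The kubelet pod_startup_latency_tracker line above reports podStartSLOduration=36.872965425 for coredns-7db6d8ff4d-kmp25; that is watchObservedRunningTime (17:53:30.872965425) minus podCreationTimestamp (17:52:54), with the firstStartedPulling/lastFinishedPulling fields left as zero values in this entry. A short check of the arithmetic (parse errors ignored for brevity):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-09-04 17:52:54 +0000 UTC")
	observed, _ := time.Parse(layout, "2024-09-04 17:53:30.872965425 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 36.872965425s
}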
Sep 4 17:53:31.422997 containerd[1456]: time="2024-09-04T17:53:31.422788140Z" level=info msg="StartContainer for \"5ecffbc0804d8f20e326f038a242202ab83bafc0248e56ae91df210d90f87319\"" Sep 4 17:53:31.467401 systemd[1]: Started cri-containerd-5ecffbc0804d8f20e326f038a242202ab83bafc0248e56ae91df210d90f87319.scope - libcontainer container 5ecffbc0804d8f20e326f038a242202ab83bafc0248e56ae91df210d90f87319. Sep 4 17:53:31.536395 containerd[1456]: time="2024-09-04T17:53:31.536124493Z" level=info msg="StartContainer for \"5ecffbc0804d8f20e326f038a242202ab83bafc0248e56ae91df210d90f87319\" returns successfully" Sep 4 17:53:31.895971 systemd[1]: run-containerd-runc-k8s.io-5ecffbc0804d8f20e326f038a242202ab83bafc0248e56ae91df210d90f87319-runc.jgqyfv.mount: Deactivated successfully. Sep 4 17:53:31.962915 kubelet[2620]: I0904 17:53:31.961758 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f4c99c577-f2nlw" podStartSLOduration=28.710505236 podStartE2EDuration="30.961729671s" podCreationTimestamp="2024-09-04 17:53:01 +0000 UTC" firstStartedPulling="2024-09-04 17:53:29.117824741 +0000 UTC m=+49.727344137" lastFinishedPulling="2024-09-04 17:53:31.369049186 +0000 UTC m=+51.978568572" observedRunningTime="2024-09-04 17:53:31.8613937 +0000 UTC m=+52.470913107" watchObservedRunningTime="2024-09-04 17:53:31.961729671 +0000 UTC m=+52.571249076" Sep 4 17:53:32.845800 containerd[1456]: time="2024-09-04T17:53:32.845725271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:32.847948 containerd[1456]: time="2024-09-04T17:53:32.847871869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:53:32.849085 containerd[1456]: time="2024-09-04T17:53:32.848933548Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:32.853978 containerd[1456]: time="2024-09-04T17:53:32.853882474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:53:32.855416 containerd[1456]: time="2024-09-04T17:53:32.855301420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.484780691s" Sep 4 17:53:32.855781 containerd[1456]: time="2024-09-04T17:53:32.855605991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:53:32.860006 containerd[1456]: time="2024-09-04T17:53:32.859561601Z" level=info msg="CreateContainer within sandbox \"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:53:32.886186 containerd[1456]: time="2024-09-04T17:53:32.885593512Z" level=info msg="CreateContainer within sandbox 
\"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9991cf0ed870d0d803184290af70436c32f75463618cc5a078304b67cbe3dfda\"" Sep 4 17:53:32.887850 containerd[1456]: time="2024-09-04T17:53:32.887778212Z" level=info msg="StartContainer for \"9991cf0ed870d0d803184290af70436c32f75463618cc5a078304b67cbe3dfda\"" Sep 4 17:53:32.972288 systemd[1]: run-containerd-runc-k8s.io-9991cf0ed870d0d803184290af70436c32f75463618cc5a078304b67cbe3dfda-runc.qUgdMz.mount: Deactivated successfully. Sep 4 17:53:32.983388 systemd[1]: Started cri-containerd-9991cf0ed870d0d803184290af70436c32f75463618cc5a078304b67cbe3dfda.scope - libcontainer container 9991cf0ed870d0d803184290af70436c32f75463618cc5a078304b67cbe3dfda. Sep 4 17:53:33.042664 containerd[1456]: time="2024-09-04T17:53:33.042608167Z" level=info msg="StartContainer for \"9991cf0ed870d0d803184290af70436c32f75463618cc5a078304b67cbe3dfda\" returns successfully" Sep 4 17:53:33.721469 kubelet[2620]: I0904 17:53:33.721424 2620 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:53:33.721469 kubelet[2620]: I0904 17:53:33.721476 2620 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:53:33.724637 systemd[1]: Started sshd@12-10.128.0.52:22-147.75.109.163:51808.service - OpenSSH per-connection server daemon (147.75.109.163:51808). Sep 4 17:53:33.858138 kubelet[2620]: I0904 17:53:33.857669 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6v7cb" podStartSLOduration=28.166925096 podStartE2EDuration="32.857644909s" podCreationTimestamp="2024-09-04 17:53:01 +0000 UTC" firstStartedPulling="2024-09-04 17:53:28.166780533 +0000 UTC m=+48.776299927" lastFinishedPulling="2024-09-04 17:53:32.857500329 +0000 UTC m=+53.467019740" observedRunningTime="2024-09-04 17:53:33.855949032 +0000 UTC m=+54.465468471" watchObservedRunningTime="2024-09-04 17:53:33.857644909 +0000 UTC m=+54.467164316" Sep 4 17:53:33.922448 ntpd[1425]: Listen normally on 8 vxlan.calico 192.168.118.128:123 Sep 4 17:53:33.923368 ntpd[1425]: 4 Sep 17:53:33 ntpd[1425]: Listen normally on 8 vxlan.calico 192.168.118.128:123 Sep 4 17:53:33.923368 ntpd[1425]: 4 Sep 17:53:33 ntpd[1425]: Listen normally on 9 vxlan.calico [fe80::64fd:32ff:fe8a:2bf1%4]:123 Sep 4 17:53:33.923368 ntpd[1425]: 4 Sep 17:53:33 ntpd[1425]: Listen normally on 10 cali84a8bd4d8d1 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:53:33.923368 ntpd[1425]: 4 Sep 17:53:33 ntpd[1425]: Listen normally on 11 cali0794a5fbbf8 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:53:33.923368 ntpd[1425]: 4 Sep 17:53:33 ntpd[1425]: Listen normally on 12 calic5022c3891f [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:53:33.923368 ntpd[1425]: 4 Sep 17:53:33 ntpd[1425]: Listen normally on 13 calia2d9b3250b5 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:53:33.922581 ntpd[1425]: Listen normally on 9 vxlan.calico [fe80::64fd:32ff:fe8a:2bf1%4]:123 Sep 4 17:53:33.922653 ntpd[1425]: Listen normally on 10 cali84a8bd4d8d1 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:53:33.922714 ntpd[1425]: Listen normally on 11 cali0794a5fbbf8 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:53:33.922766 ntpd[1425]: Listen normally on 12 calic5022c3891f [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:53:33.922816 ntpd[1425]: Listen normally 
on 13 calia2d9b3250b5 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:53:34.029624 sshd[4582]: Accepted publickey for core from 147.75.109.163 port 51808 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:34.031907 sshd[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:34.038235 systemd-logind[1441]: New session 11 of user core. Sep 4 17:53:34.046451 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:53:34.133773 systemd[1]: run-containerd-runc-k8s.io-b6cf4d8a55953e52d3b77c4fd0f110cb895892ff6d7dae437d9998924309bcc2-runc.OdlW06.mount: Deactivated successfully. Sep 4 17:53:34.390611 sshd[4582]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:34.396377 systemd[1]: sshd@12-10.128.0.52:22-147.75.109.163:51808.service: Deactivated successfully. Sep 4 17:53:34.399041 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:53:34.400130 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:53:34.402059 systemd-logind[1441]: Removed session 11. Sep 4 17:53:39.448229 systemd[1]: Started sshd@13-10.128.0.52:22-147.75.109.163:36442.service - OpenSSH per-connection server daemon (147.75.109.163:36442). Sep 4 17:53:39.552850 containerd[1456]: time="2024-09-04T17:53:39.552656564Z" level=info msg="StopPodSandbox for \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\"" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.600 [WARNING][4638] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917", Pod:"csi-node-driver-6v7cb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0794a5fbbf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.600 [INFO][4638] k8s.go 608: Cleaning up netns ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.600 [INFO][4638] 
dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" iface="eth0" netns="" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.600 [INFO][4638] k8s.go 615: Releasing IP address(es) ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.600 [INFO][4638] utils.go 188: Calico CNI releasing IP address ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.627 [INFO][4644] ipam_plugin.go 417: Releasing address using handleID ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.627 [INFO][4644] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.627 [INFO][4644] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.636 [WARNING][4644] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.636 [INFO][4644] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.637 [INFO][4644] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:39.640238 containerd[1456]: 2024-09-04 17:53:39.638 [INFO][4638] k8s.go 621: Teardown processing complete. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.641004 containerd[1456]: time="2024-09-04T17:53:39.640293329Z" level=info msg="TearDown network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\" successfully" Sep 4 17:53:39.641004 containerd[1456]: time="2024-09-04T17:53:39.640330946Z" level=info msg="StopPodSandbox for \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\" returns successfully" Sep 4 17:53:39.641131 containerd[1456]: time="2024-09-04T17:53:39.641037402Z" level=info msg="RemovePodSandbox for \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\"" Sep 4 17:53:39.641131 containerd[1456]: time="2024-09-04T17:53:39.641081043Z" level=info msg="Forcibly stopping sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\"" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.700 [WARNING][4662] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f204459-67ca-4ef3-87db-d2dfa1c8a5a7", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"462ec1a6aa2b146c4fbcf7537a9443913535336513997b5865b9c3a866a26917", Pod:"csi-node-driver-6v7cb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.118.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0794a5fbbf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.701 [INFO][4662] k8s.go 608: Cleaning up netns ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.701 [INFO][4662] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" iface="eth0" netns="" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.701 [INFO][4662] k8s.go 615: Releasing IP address(es) ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.701 [INFO][4662] utils.go 188: Calico CNI releasing IP address ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.731 [INFO][4669] ipam_plugin.go 417: Releasing address using handleID ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.731 [INFO][4669] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.731 [INFO][4669] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.738 [WARNING][4669] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.738 [INFO][4669] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" HandleID="k8s-pod-network.15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-csi--node--driver--6v7cb-eth0" Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.740 [INFO][4669] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:39.742460 containerd[1456]: 2024-09-04 17:53:39.741 [INFO][4662] k8s.go 621: Teardown processing complete. ContainerID="15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969" Sep 4 17:53:39.742460 containerd[1456]: time="2024-09-04T17:53:39.742416503Z" level=info msg="TearDown network for sandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\" successfully" Sep 4 17:53:39.747766 containerd[1456]: time="2024-09-04T17:53:39.747651245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:53:39.747766 containerd[1456]: time="2024-09-04T17:53:39.747736937Z" level=info msg="RemovePodSandbox \"15230386667992300db60420603239f9442b3c652fddd7ff6bb0eededb63a969\" returns successfully" Sep 4 17:53:39.749025 containerd[1456]: time="2024-09-04T17:53:39.748620333Z" level=info msg="StopPodSandbox for \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\"" Sep 4 17:53:39.763362 sshd[4621]: Accepted publickey for core from 147.75.109.163 port 36442 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:39.766870 sshd[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:39.778730 systemd-logind[1441]: New session 12 of user core. Sep 4 17:53:39.783390 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.812 [WARNING][4689] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"090627b4-68b0-464b-9437-a497123fe057", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581", Pod:"coredns-7db6d8ff4d-kmp25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2d9b3250b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.812 [INFO][4689] k8s.go 608: Cleaning up netns ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.812 [INFO][4689] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" iface="eth0" netns="" Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.812 [INFO][4689] k8s.go 615: Releasing IP address(es) ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.812 [INFO][4689] utils.go 188: Calico CNI releasing IP address ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.837 [INFO][4697] ipam_plugin.go 417: Releasing address using handleID ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.837 [INFO][4697] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.837 [INFO][4697] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.844 [WARNING][4697] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.844 [INFO][4697] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.846 [INFO][4697] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:39.849432 containerd[1456]: 2024-09-04 17:53:39.848 [INFO][4689] k8s.go 621: Teardown processing complete. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.850356 containerd[1456]: time="2024-09-04T17:53:39.850148184Z" level=info msg="TearDown network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\" successfully" Sep 4 17:53:39.850356 containerd[1456]: time="2024-09-04T17:53:39.850255814Z" level=info msg="StopPodSandbox for \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\" returns successfully" Sep 4 17:53:39.851012 containerd[1456]: time="2024-09-04T17:53:39.850981264Z" level=info msg="RemovePodSandbox for \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\"" Sep 4 17:53:39.851257 containerd[1456]: time="2024-09-04T17:53:39.851225995Z" level=info msg="Forcibly stopping sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\"" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.896 [WARNING][4715] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"090627b4-68b0-464b-9437-a497123fe057", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"82c1fe216b3b26af966abf8f234c78c33fdebc5583693c2d0232729e8bf40581", Pod:"coredns-7db6d8ff4d-kmp25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2d9b3250b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.897 [INFO][4715] k8s.go 608: Cleaning up netns ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.897 [INFO][4715] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" iface="eth0" netns="" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.897 [INFO][4715] k8s.go 615: Releasing IP address(es) ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.897 [INFO][4715] utils.go 188: Calico CNI releasing IP address ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.923 [INFO][4721] ipam_plugin.go 417: Releasing address using handleID ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.923 [INFO][4721] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.923 [INFO][4721] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.931 [WARNING][4721] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.931 [INFO][4721] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" HandleID="k8s-pod-network.c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kmp25-eth0" Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.933 [INFO][4721] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:39.940146 containerd[1456]: 2024-09-04 17:53:39.936 [INFO][4715] k8s.go 621: Teardown processing complete. ContainerID="c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43" Sep 4 17:53:39.940146 containerd[1456]: time="2024-09-04T17:53:39.938136727Z" level=info msg="TearDown network for sandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\" successfully" Sep 4 17:53:39.943986 containerd[1456]: time="2024-09-04T17:53:39.943910761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:53:39.944246 containerd[1456]: time="2024-09-04T17:53:39.944217938Z" level=info msg="RemovePodSandbox \"c080fddd61097a4c7ac68ca5afc0847d98be35a0b6dd702524b6e61b51ddee43\" returns successfully" Sep 4 17:53:39.945023 containerd[1456]: time="2024-09-04T17:53:39.944992825Z" level=info msg="StopPodSandbox for \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\"" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.036 [WARNING][4747] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0", GenerateName:"calico-kube-controllers-5f4c99c577-", Namespace:"calico-system", SelfLink:"", UID:"f2e7d76b-6bfd-417d-a1c5-8517155d4273", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4c99c577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1", Pod:"calico-kube-controllers-5f4c99c577-f2nlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5022c3891f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.037 [INFO][4747] k8s.go 608: Cleaning up netns ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.037 [INFO][4747] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" iface="eth0" netns="" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.037 [INFO][4747] k8s.go 615: Releasing IP address(es) ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.038 [INFO][4747] utils.go 188: Calico CNI releasing IP address ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.087 [INFO][4756] ipam_plugin.go 417: Releasing address using handleID ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.087 [INFO][4756] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.088 [INFO][4756] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.097 [WARNING][4756] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.097 [INFO][4756] ipam_plugin.go 445: Releasing address using workloadID ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.099 [INFO][4756] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:40.102866 containerd[1456]: 2024-09-04 17:53:40.100 [INFO][4747] k8s.go 621: Teardown processing complete. ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.103801 containerd[1456]: time="2024-09-04T17:53:40.102827571Z" level=info msg="TearDown network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\" successfully" Sep 4 17:53:40.103801 containerd[1456]: time="2024-09-04T17:53:40.103265047Z" level=info msg="StopPodSandbox for \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\" returns successfully" Sep 4 17:53:40.104573 containerd[1456]: time="2024-09-04T17:53:40.104096194Z" level=info msg="RemovePodSandbox for \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\"" Sep 4 17:53:40.104573 containerd[1456]: time="2024-09-04T17:53:40.104133957Z" level=info msg="Forcibly stopping sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\"" Sep 4 17:53:40.114601 sshd[4621]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:40.119694 systemd[1]: sshd@13-10.128.0.52:22-147.75.109.163:36442.service: Deactivated successfully. Sep 4 17:53:40.126875 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:53:40.132051 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:53:40.136837 systemd-logind[1441]: Removed session 12. Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.166 [WARNING][4774] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0", GenerateName:"calico-kube-controllers-5f4c99c577-", Namespace:"calico-system", SelfLink:"", UID:"f2e7d76b-6bfd-417d-a1c5-8517155d4273", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4c99c577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"fc800e2b3a09f5fa4af462c2112338027b5e9f52ce69fd2933b4d36d59bb1da1", Pod:"calico-kube-controllers-5f4c99c577-f2nlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5022c3891f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.166 [INFO][4774] k8s.go 608: Cleaning up netns ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.167 [INFO][4774] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" iface="eth0" netns="" Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.167 [INFO][4774] k8s.go 615: Releasing IP address(es) ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.167 [INFO][4774] utils.go 188: Calico CNI releasing IP address ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.192 [INFO][4783] ipam_plugin.go 417: Releasing address using handleID ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.193 [INFO][4783] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.193 [INFO][4783] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.199 [WARNING][4783] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.199 [INFO][4783] ipam_plugin.go 445: Releasing address using workloadID ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" HandleID="k8s-pod-network.edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--kube--controllers--5f4c99c577--f2nlw-eth0" Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.201 [INFO][4783] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:40.203429 containerd[1456]: 2024-09-04 17:53:40.202 [INFO][4774] k8s.go 621: Teardown processing complete. ContainerID="edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed" Sep 4 17:53:40.204571 containerd[1456]: time="2024-09-04T17:53:40.203481868Z" level=info msg="TearDown network for sandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\" successfully" Sep 4 17:53:40.213802 containerd[1456]: time="2024-09-04T17:53:40.212684195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:53:40.213802 containerd[1456]: time="2024-09-04T17:53:40.212815634Z" level=info msg="RemovePodSandbox \"edfb6999042757428ea4cf4185b7a5f38a7f8faf9db1fb0a8d6c7a236029faed\" returns successfully" Sep 4 17:53:40.214933 containerd[1456]: time="2024-09-04T17:53:40.214901962Z" level=info msg="StopPodSandbox for \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\"" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.260 [WARNING][4801] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f10445bd-c567-4cbb-b259-041496b4f378", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9", Pod:"coredns-7db6d8ff4d-s2r6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8bd4d8d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.260 [INFO][4801] k8s.go 608: Cleaning up netns ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.260 [INFO][4801] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" iface="eth0" netns="" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.261 [INFO][4801] k8s.go 615: Releasing IP address(es) ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.261 [INFO][4801] utils.go 188: Calico CNI releasing IP address ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.287 [INFO][4807] ipam_plugin.go 417: Releasing address using handleID ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.288 [INFO][4807] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.288 [INFO][4807] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.294 [WARNING][4807] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.294 [INFO][4807] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.296 [INFO][4807] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:40.298955 containerd[1456]: 2024-09-04 17:53:40.297 [INFO][4801] k8s.go 621: Teardown processing complete. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.299739 containerd[1456]: time="2024-09-04T17:53:40.299077658Z" level=info msg="TearDown network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\" successfully" Sep 4 17:53:40.299739 containerd[1456]: time="2024-09-04T17:53:40.299119167Z" level=info msg="StopPodSandbox for \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\" returns successfully" Sep 4 17:53:40.299849 containerd[1456]: time="2024-09-04T17:53:40.299738474Z" level=info msg="RemovePodSandbox for \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\"" Sep 4 17:53:40.299849 containerd[1456]: time="2024-09-04T17:53:40.299776647Z" level=info msg="Forcibly stopping sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\"" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.344 [WARNING][4825] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f10445bd-c567-4cbb-b259-041496b4f378", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 52, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"32c59166c4e1dad89ed22bb5e5b59d6b662dcfc67ee76e6718eafe98274acca9", Pod:"coredns-7db6d8ff4d-s2r6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8bd4d8d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.345 [INFO][4825] k8s.go 608: Cleaning up netns ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.345 [INFO][4825] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" iface="eth0" netns="" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.345 [INFO][4825] k8s.go 615: Releasing IP address(es) ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.345 [INFO][4825] utils.go 188: Calico CNI releasing IP address ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.373 [INFO][4831] ipam_plugin.go 417: Releasing address using handleID ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.373 [INFO][4831] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.373 [INFO][4831] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.380 [WARNING][4831] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.380 [INFO][4831] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" HandleID="k8s-pod-network.0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--s2r6n-eth0" Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.381 [INFO][4831] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:53:40.385115 containerd[1456]: 2024-09-04 17:53:40.383 [INFO][4825] k8s.go 621: Teardown processing complete. ContainerID="0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787" Sep 4 17:53:40.386347 containerd[1456]: time="2024-09-04T17:53:40.385212409Z" level=info msg="TearDown network for sandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\" successfully" Sep 4 17:53:40.389900 containerd[1456]: time="2024-09-04T17:53:40.389857831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:53:40.390086 containerd[1456]: time="2024-09-04T17:53:40.389939783Z" level=info msg="RemovePodSandbox \"0e5cb0fa1a121df5e8a5f95962197ec847cb1409092d3ffa1e92784f84413787\" returns successfully" Sep 4 17:53:45.170593 systemd[1]: Started sshd@14-10.128.0.52:22-147.75.109.163:36448.service - OpenSSH per-connection server daemon (147.75.109.163:36448). Sep 4 17:53:45.451425 sshd[4868]: Accepted publickey for core from 147.75.109.163 port 36448 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:45.453013 sshd[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:45.459248 systemd-logind[1441]: New session 13 of user core. Sep 4 17:53:45.466363 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:53:45.740442 sshd[4868]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:45.746716 systemd[1]: sshd@14-10.128.0.52:22-147.75.109.163:36448.service: Deactivated successfully. Sep 4 17:53:45.749910 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:53:45.751051 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:53:45.753981 systemd-logind[1441]: Removed session 13. Sep 4 17:53:45.797987 systemd[1]: Started sshd@15-10.128.0.52:22-147.75.109.163:39578.service - OpenSSH per-connection server daemon (147.75.109.163:39578). Sep 4 17:53:46.084792 sshd[4881]: Accepted publickey for core from 147.75.109.163 port 39578 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:46.087067 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:46.093925 systemd-logind[1441]: New session 14 of user core. Sep 4 17:53:46.096406 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 17:53:46.408737 sshd[4881]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:46.414120 systemd[1]: sshd@15-10.128.0.52:22-147.75.109.163:39578.service: Deactivated successfully. Sep 4 17:53:46.417205 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:53:46.419443 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:53:46.421095 systemd-logind[1441]: Removed session 14. Sep 4 17:53:46.464594 systemd[1]: Started sshd@16-10.128.0.52:22-147.75.109.163:39590.service - OpenSSH per-connection server daemon (147.75.109.163:39590). Sep 4 17:53:46.756127 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 39590 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:46.758422 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:46.765062 systemd-logind[1441]: New session 15 of user core. Sep 4 17:53:46.775447 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:53:47.044458 sshd[4892]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:47.050335 systemd[1]: sshd@16-10.128.0.52:22-147.75.109.163:39590.service: Deactivated successfully. Sep 4 17:53:47.052752 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:53:47.053951 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:53:47.055534 systemd-logind[1441]: Removed session 15. Sep 4 17:53:52.101553 systemd[1]: Started sshd@17-10.128.0.52:22-147.75.109.163:39604.service - OpenSSH per-connection server daemon (147.75.109.163:39604). Sep 4 17:53:52.391495 sshd[4910]: Accepted publickey for core from 147.75.109.163 port 39604 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:52.393493 sshd[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:52.402661 systemd-logind[1441]: New session 16 of user core. Sep 4 17:53:52.407418 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:53:52.685417 sshd[4910]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:52.691493 systemd[1]: sshd@17-10.128.0.52:22-147.75.109.163:39604.service: Deactivated successfully. Sep 4 17:53:52.694065 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:53:52.695147 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:53:52.696665 systemd-logind[1441]: Removed session 16. Sep 4 17:53:57.740722 systemd[1]: Started sshd@18-10.128.0.52:22-147.75.109.163:58116.service - OpenSSH per-connection server daemon (147.75.109.163:58116). Sep 4 17:53:58.041466 sshd[4930]: Accepted publickey for core from 147.75.109.163 port 58116 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:53:58.043401 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:53:58.050398 systemd-logind[1441]: New session 17 of user core. Sep 4 17:53:58.054402 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:53:58.333499 sshd[4930]: pam_unix(sshd:session): session closed for user core Sep 4 17:53:58.338455 systemd[1]: sshd@18-10.128.0.52:22-147.75.109.163:58116.service: Deactivated successfully. Sep 4 17:53:58.341872 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:53:58.344394 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:53:58.346835 systemd-logind[1441]: Removed session 17. 
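Editor's note: sessions 12 through 17 above all follow the same pattern: sshd accepts the public key, pam_unix opens the session, systemd-logind logs "New session N of user core", and shortly after the disconnect logind logs "Removed session N". If you wanted to measure session lifetimes from a journal dump like this one, a small parser along these lines would do; the timestamps carry no year, so only durations are meaningful, and the field layout is assumed from the lines above.

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    var (
        tsRe      = regexp.MustCompile(`^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)`)
        newRe     = regexp.MustCompile(`New session (\d+) of user`)
        removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    // stamp extracts the leading syslog-style timestamp from a journal line.
    func stamp(line string) time.Time {
        t, _ := time.Parse("Jan 2 15:04:05.000000", tsRe.FindStringSubmatch(line)[1])
        return t
    }

    func main() {
        // Two lines copied from session 13 above.
        lines := []string{
            "Sep 4 17:53:45.459248 systemd-logind[1441]: New session 13 of user core.",
            "Sep 4 17:53:45.753981 systemd-logind[1441]: Removed session 13.",
        }
        opened := map[string]time.Time{}
        for _, l := range lines {
            switch {
            case newRe.MatchString(l):
                opened[newRe.FindStringSubmatch(l)[1]] = stamp(l)
            case removedRe.MatchString(l):
                id := removedRe.FindStringSubmatch(l)[1]
                fmt.Printf("session %s lasted %v\n", id, stamp(l).Sub(opened[id]))
            }
        }
    }

Run against the full journal, the same pairing shows these CI sessions are short-lived: session 13, for example, lasts roughly 295 ms between logind's open and remove entries.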
Sep 4 17:53:59.266625 systemd[1]: Started sshd@19-10.128.0.52:22-128.199.100.189:48278.service - OpenSSH per-connection server daemon (128.199.100.189:48278). Sep 4 17:54:00.613172 sshd[4944]: Received disconnect from 128.199.100.189 port 48278:11: Bye Bye [preauth] Sep 4 17:54:00.613172 sshd[4944]: Disconnected from authenticating user root 128.199.100.189 port 48278 [preauth] Sep 4 17:54:00.616591 systemd[1]: sshd@19-10.128.0.52:22-128.199.100.189:48278.service: Deactivated successfully. Sep 4 17:54:00.871732 kubelet[2620]: I0904 17:54:00.871567 2620 topology_manager.go:215] "Topology Admit Handler" podUID="71bef7f8-8900-4db4-a4da-126e89c95642" podNamespace="calico-apiserver" podName="calico-apiserver-85cc95999c-l9rqm" Sep 4 17:54:00.888095 systemd[1]: Created slice kubepods-besteffort-pod71bef7f8_8900_4db4_a4da_126e89c95642.slice - libcontainer container kubepods-besteffort-pod71bef7f8_8900_4db4_a4da_126e89c95642.slice. Sep 4 17:54:00.898796 kubelet[2620]: I0904 17:54:00.898421 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2755\" (UniqueName: \"kubernetes.io/projected/71bef7f8-8900-4db4-a4da-126e89c95642-kube-api-access-h2755\") pod \"calico-apiserver-85cc95999c-l9rqm\" (UID: \"71bef7f8-8900-4db4-a4da-126e89c95642\") " pod="calico-apiserver/calico-apiserver-85cc95999c-l9rqm" Sep 4 17:54:00.898796 kubelet[2620]: I0904 17:54:00.898521 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71bef7f8-8900-4db4-a4da-126e89c95642-calico-apiserver-certs\") pod \"calico-apiserver-85cc95999c-l9rqm\" (UID: \"71bef7f8-8900-4db4-a4da-126e89c95642\") " pod="calico-apiserver/calico-apiserver-85cc95999c-l9rqm" Sep 4 17:54:00.999143 kubelet[2620]: E0904 17:54:00.999063 2620 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:54:00.999677 kubelet[2620]: E0904 17:54:00.999183 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71bef7f8-8900-4db4-a4da-126e89c95642-calico-apiserver-certs podName:71bef7f8-8900-4db4-a4da-126e89c95642 nodeName:}" failed. No retries permitted until 2024-09-04 17:54:01.499144713 +0000 UTC m=+82.108664096 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/71bef7f8-8900-4db4-a4da-126e89c95642-calico-apiserver-certs") pod "calico-apiserver-85cc95999c-l9rqm" (UID: "71bef7f8-8900-4db4-a4da-126e89c95642") : secret "calico-apiserver-certs" not found Sep 4 17:54:01.795619 containerd[1456]: time="2024-09-04T17:54:01.795565487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cc95999c-l9rqm,Uid:71bef7f8-8900-4db4-a4da-126e89c95642,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:54:01.974636 systemd-networkd[1375]: cali98dda0a6152: Link UP Sep 4 17:54:01.975005 systemd-networkd[1375]: cali98dda0a6152: Gained carrier Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.862 [INFO][4955] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0 calico-apiserver-85cc95999c- calico-apiserver 71bef7f8-8900-4db4-a4da-126e89c95642 1024 0 2024-09-04 17:54:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85cc95999c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal calico-apiserver-85cc95999c-l9rqm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali98dda0a6152 [] []}} ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.862 [INFO][4955] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.910 [INFO][4965] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" HandleID="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.927 [INFO][4965] ipam_plugin.go 270: Auto assigning IP ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" HandleID="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", "pod":"calico-apiserver-85cc95999c-l9rqm", "timestamp":"2024-09-04 17:54:01.910491959 +0000 UTC"}, Hostname:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.928 [INFO][4965] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.928 [INFO][4965] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.928 [INFO][4965] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal' Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.931 [INFO][4965] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.937 [INFO][4965] ipam.go 372: Looking up existing affinities for host host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.944 [INFO][4965] ipam.go 489: Trying affinity for 192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.948 [INFO][4965] ipam.go 155: Attempting to load block cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.951 [INFO][4965] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.128/26 host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.951 [INFO][4965] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.128/26 handle="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.953 [INFO][4965] ipam.go 1685: Creating new handle: k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.958 [INFO][4965] ipam.go 1203: Writing block in order to claim IPs block=192.168.118.128/26 handle="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.964 [INFO][4965] ipam.go 1216: Successfully claimed IPs: [192.168.118.133/26] block=192.168.118.128/26 handle="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.964 [INFO][4965] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.133/26] handle="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" host="ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal" Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.964 [INFO][4965] ipam_plugin.go 379: Released host-wide IPAM lock. 
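Editor's note: the ipam.go walk above (look up the host's affinities, confirm the affinity for 192.168.118.128/26, load the block, create a handle, write the block) ends with 192.168.118.133/26 being claimed for the new calico-apiserver pod. Conceptually the assignment step is "first free address in the node's affine /26"; below is a rough stand-alone illustration of just that step. The occupancy set is illustrative: .129, .131 and .132 appear in the endpoint dumps above, while .128 and .130 are assumed taken here purely so the example lands on .133 like the log does.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // firstFree returns the first address in block that is not already allocated.
    func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Masked().Addr(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.118.128/26") // the affine block from the log
        allocated := map[netip.Addr]bool{}
        for _, s := range []string{
            "192.168.118.128", "192.168.118.129", "192.168.118.130",
            "192.168.118.131", "192.168.118.132",
        } {
            allocated[netip.MustParseAddr(s)] = true
        }
        a, _ := firstFree(block, allocated)
        fmt.Println(a) // 192.168.118.133, matching the address the IPAM plugin claimed
    }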
Sep 4 17:54:02.002778 containerd[1456]: 2024-09-04 17:54:01.965 [INFO][4965] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.118.133/26] IPv6=[] ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" HandleID="k8s-pod-network.867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Workload="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" Sep 4 17:54:02.004020 containerd[1456]: 2024-09-04 17:54:01.968 [INFO][4955] k8s.go 386: Populated endpoint ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0", GenerateName:"calico-apiserver-85cc95999c-", Namespace:"calico-apiserver", SelfLink:"", UID:"71bef7f8-8900-4db4-a4da-126e89c95642", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cc95999c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-85cc95999c-l9rqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98dda0a6152", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:54:02.004020 containerd[1456]: 2024-09-04 17:54:01.968 [INFO][4955] k8s.go 387: Calico CNI using IPs: [192.168.118.133/32] ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" Sep 4 17:54:02.004020 containerd[1456]: 2024-09-04 17:54:01.969 [INFO][4955] dataplane_linux.go 68: Setting the host side veth name to cali98dda0a6152 ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" Sep 4 17:54:02.004020 containerd[1456]: 2024-09-04 17:54:01.973 [INFO][4955] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" 
WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" Sep 4 17:54:02.004020 containerd[1456]: 2024-09-04 17:54:01.974 [INFO][4955] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0", GenerateName:"calico-apiserver-85cc95999c-", Namespace:"calico-apiserver", SelfLink:"", UID:"71bef7f8-8900-4db4-a4da-126e89c95642", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cc95999c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4054-1-0-a2db7da4357f84326c59.c.flatcar-212911.internal", ContainerID:"867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa", Pod:"calico-apiserver-85cc95999c-l9rqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98dda0a6152", MAC:"be:49:74:83:9c:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:54:02.004020 containerd[1456]: 2024-09-04 17:54:01.999 [INFO][4955] k8s.go 500: Wrote updated endpoint to datastore ContainerID="867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa" Namespace="calico-apiserver" Pod="calico-apiserver-85cc95999c-l9rqm" WorkloadEndpoint="ci--4054--1--0--a2db7da4357f84326c59.c.flatcar--212911.internal-k8s-calico--apiserver--85cc95999c--l9rqm-eth0" Sep 4 17:54:02.058048 containerd[1456]: time="2024-09-04T17:54:02.057641791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:54:02.058048 containerd[1456]: time="2024-09-04T17:54:02.057878213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:54:02.058048 containerd[1456]: time="2024-09-04T17:54:02.057950445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:54:02.061025 containerd[1456]: time="2024-09-04T17:54:02.059755679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:54:02.131683 systemd[1]: run-containerd-runc-k8s.io-867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa-runc.7LZbbN.mount: Deactivated successfully. Sep 4 17:54:02.144132 systemd[1]: Started cri-containerd-867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa.scope - libcontainer container 867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa. Sep 4 17:54:02.243604 containerd[1456]: time="2024-09-04T17:54:02.243539215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cc95999c-l9rqm,Uid:71bef7f8-8900-4db4-a4da-126e89c95642,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa\"" Sep 4 17:54:02.246067 containerd[1456]: time="2024-09-04T17:54:02.246025659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:54:03.403355 systemd[1]: Started sshd@20-10.128.0.52:22-147.75.109.163:58128.service - OpenSSH per-connection server daemon (147.75.109.163:58128). Sep 4 17:54:03.710297 systemd-networkd[1375]: cali98dda0a6152: Gained IPv6LL Sep 4 17:54:03.737008 sshd[5040]: Accepted publickey for core from 147.75.109.163 port 58128 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:54:03.741684 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:54:03.763244 systemd-logind[1441]: New session 18 of user core. Sep 4 17:54:03.765639 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:54:04.161828 systemd[1]: run-containerd-runc-k8s.io-b6cf4d8a55953e52d3b77c4fd0f110cb895892ff6d7dae437d9998924309bcc2-runc.itaHlv.mount: Deactivated successfully. Sep 4 17:54:04.225965 sshd[5040]: pam_unix(sshd:session): session closed for user core Sep 4 17:54:04.242875 systemd[1]: sshd@20-10.128.0.52:22-147.75.109.163:58128.service: Deactivated successfully. Sep 4 17:54:04.252008 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:54:04.258194 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:54:04.263060 systemd-logind[1441]: Removed session 18. Sep 4 17:54:04.291504 systemd[1]: Started sshd@21-10.128.0.52:22-147.75.109.163:58130.service - OpenSSH per-connection server daemon (147.75.109.163:58130). Sep 4 17:54:04.623364 sshd[5074]: Accepted publickey for core from 147.75.109.163 port 58130 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:54:04.628742 sshd[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:54:04.647489 systemd-logind[1441]: New session 19 of user core. Sep 4 17:54:04.653385 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:54:05.103606 sshd[5074]: pam_unix(sshd:session): session closed for user core Sep 4 17:54:05.112063 systemd[1]: sshd@21-10.128.0.52:22-147.75.109.163:58130.service: Deactivated successfully. Sep 4 17:54:05.117399 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:54:05.122396 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:54:05.127869 systemd-logind[1441]: Removed session 19. Sep 4 17:54:05.166537 systemd[1]: Started sshd@22-10.128.0.52:22-147.75.109.163:58140.service - OpenSSH per-connection server daemon (147.75.109.163:58140). 
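Editor's note: back at 17:54:00-17:54:01 kubelet could not mount calico-apiserver-certs (secret "calico-apiserver-certs" not found) and logged "No retries permitted until ... (durationBeforeRetry 500ms)"; the secret evidently appeared soon after, since the sandbox above was created at 17:54:01.79 and started cleanly. The pattern is an ordinary bounded retry with a growing delay. A generic sketch follows; the doubling, cap, and attempt count are illustrative, not kubelet's actual constants.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff retries op, doubling the wait between attempts up to maxDelay.
    func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
        delay := initial
        for i := 1; i <= attempts; i++ {
            err := op()
            if err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", i, err, delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        return errors.New("gave up")
    }

    func main() {
        tries := 0
        _ = retryWithBackoff(func() error {
            tries++
            if tries < 3 {
                return errors.New(`secret "calico-apiserver-certs" not found`)
            }
            return nil // the secret eventually exists and the mount would succeed
        }, 500*time.Millisecond, 2*time.Minute, 10)
    }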
Sep 4 17:54:05.502387 sshd[5090]: Accepted publickey for core from 147.75.109.163 port 58140 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U Sep 4 17:54:05.504404 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:54:05.510307 systemd-logind[1441]: New session 20 of user core. Sep 4 17:54:05.517540 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:54:05.541893 containerd[1456]: time="2024-09-04T17:54:05.541830435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:54:05.543968 containerd[1456]: time="2024-09-04T17:54:05.543889338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:54:05.544968 containerd[1456]: time="2024-09-04T17:54:05.544903794Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:54:05.549187 containerd[1456]: time="2024-09-04T17:54:05.549131490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:54:05.550244 containerd[1456]: time="2024-09-04T17:54:05.550200591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.304126509s" Sep 4 17:54:05.550244 containerd[1456]: time="2024-09-04T17:54:05.550248521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:54:05.555225 containerd[1456]: time="2024-09-04T17:54:05.555185620Z" level=info msg="CreateContainer within sandbox \"867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:54:05.572780 containerd[1456]: time="2024-09-04T17:54:05.572725197Z" level=info msg="CreateContainer within sandbox \"867892544ca2ee9374df89e0513afc1262c7165cbcc63b26e3a1fb75916463fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c9e777856b9f1c755f80b8ef1aa59669e0d9ea86ecdfccc6cd9d937363a8adb\"" Sep 4 17:54:05.578181 containerd[1456]: time="2024-09-04T17:54:05.576519134Z" level=info msg="StartContainer for \"4c9e777856b9f1c755f80b8ef1aa59669e0d9ea86ecdfccc6cd9d937363a8adb\"" Sep 4 17:54:05.579828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705227379.mount: Deactivated successfully. Sep 4 17:54:05.636340 systemd[1]: Started cri-containerd-4c9e777856b9f1c755f80b8ef1aa59669e0d9ea86ecdfccc6cd9d937363a8adb.scope - libcontainer container 4c9e777856b9f1c755f80b8ef1aa59669e0d9ea86ecdfccc6cd9d937363a8adb. 
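Editor's note, for a rough sense of scale: the pull above reports the apiserver image at 41,912,266 bytes (repo digest size) completing in 3.304126509 s, which works out to about 41,912,266 / 3.304 ≈ 12.7 MB/s (≈ 12.1 MiB/s) from ghcr.io; the separate "bytes read=40419849" figure is what this pull session actually read from the registry.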
Sep 4 17:54:05.722207 containerd[1456]: time="2024-09-04T17:54:05.718987465Z" level=info msg="StartContainer for \"4c9e777856b9f1c755f80b8ef1aa59669e0d9ea86ecdfccc6cd9d937363a8adb\" returns successfully"
Sep 4 17:54:05.922812 ntpd[1425]: Listen normally on 14 cali98dda0a6152 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:54:05.923357 ntpd[1425]: 4 Sep 17:54:05 ntpd[1425]: Listen normally on 14 cali98dda0a6152 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:54:06.797673 systemd[1]: run-containerd-runc-k8s.io-5ecffbc0804d8f20e326f038a242202ab83bafc0248e56ae91df210d90f87319-runc.0Kdl8c.mount: Deactivated successfully.
Sep 4 17:54:08.015259 kubelet[2620]: I0904 17:54:08.015176 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85cc95999c-l9rqm" podStartSLOduration=4.707396691 podStartE2EDuration="8.013832082s" podCreationTimestamp="2024-09-04 17:54:00 +0000 UTC" firstStartedPulling="2024-09-04 17:54:02.245408185 +0000 UTC m=+82.854927581" lastFinishedPulling="2024-09-04 17:54:05.551843576 +0000 UTC m=+86.161362972" observedRunningTime="2024-09-04 17:54:05.96865048 +0000 UTC m=+86.578169887" watchObservedRunningTime="2024-09-04 17:54:08.013832082 +0000 UTC m=+88.623351487"
Sep 4 17:54:08.320593 sshd[5090]: pam_unix(sshd:session): session closed for user core
Sep 4 17:54:08.330437 systemd[1]: sshd@22-10.128.0.52:22-147.75.109.163:58140.service: Deactivated successfully.
Sep 4 17:54:08.336834 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:54:08.342740 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:54:08.346639 systemd-logind[1441]: Removed session 20.
Sep 4 17:54:08.379642 systemd[1]: Started sshd@23-10.128.0.52:22-147.75.109.163:57390.service - OpenSSH per-connection server daemon (147.75.109.163:57390).
Sep 4 17:54:08.700746 sshd[5176]: Accepted publickey for core from 147.75.109.163 port 57390 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U
Sep 4 17:54:08.702731 sshd[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:54:08.709261 systemd-logind[1441]: New session 21 of user core.
Sep 4 17:54:08.714385 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:54:09.202567 sshd[5176]: pam_unix(sshd:session): session closed for user core
Sep 4 17:54:09.208470 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:54:09.210084 systemd[1]: sshd@23-10.128.0.52:22-147.75.109.163:57390.service: Deactivated successfully.
Sep 4 17:54:09.214088 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:54:09.218202 systemd-logind[1441]: Removed session 21.
Sep 4 17:54:09.256722 systemd[1]: Started sshd@24-10.128.0.52:22-147.75.109.163:57392.service - OpenSSH per-connection server daemon (147.75.109.163:57392).
Sep 4 17:54:09.550691 sshd[5186]: Accepted publickey for core from 147.75.109.163 port 57392 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U
Sep 4 17:54:09.552100 sshd[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:54:09.559342 systemd-logind[1441]: New session 22 of user core.
Sep 4 17:54:09.564378 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:54:09.854426 sshd[5186]: pam_unix(sshd:session): session closed for user core
Sep 4 17:54:09.861296 systemd[1]: sshd@24-10.128.0.52:22-147.75.109.163:57392.service: Deactivated successfully.
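[Editor's note] The kubelet pod_startup_latency_tracker entry above reports podStartE2EDuration="8.013832082s" and podStartSLOduration=4.707396691 for calico-apiserver-85cc95999c-l9rqm; the two figures differ by exactly the image-pull window (lastFinishedPulling minus firstStartedPulling). A short arithmetic sketch (not kubelet code) that reproduces both numbers from the timestamps logged in that entry, expressed as seconds after the 17:54:00 podCreationTimestamp:

from decimal import Decimal

# Offsets in seconds after podCreationTimestamp (2024-09-04 17:54:00 UTC),
# copied from the kubelet entry above.
first_started_pulling  = Decimal("2.245408185")
last_finished_pulling  = Decimal("5.551843576")
watch_observed_running = Decimal("8.013832082")

pull_window  = last_finished_pulling - first_started_pulling  # 3.306435391 s
e2e_duration = watch_observed_running                         # 8.013832082 s, matches podStartE2EDuration
slo_duration = e2e_duration - pull_window                     # 4.707396691 s, matches podStartSLOduration

print(f"image pull window:   {pull_window}s")
print(f"podStartE2EDuration: {e2e_duration}s")
print(f"podStartSLOduration: {slo_duration}s")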
Sep 4 17:54:09.864630 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:54:09.866119 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:54:09.869348 systemd-logind[1441]: Removed session 22.
Sep 4 17:54:14.910570 systemd[1]: Started sshd@25-10.128.0.52:22-147.75.109.163:57408.service - OpenSSH per-connection server daemon (147.75.109.163:57408).
Sep 4 17:54:15.208597 sshd[5227]: Accepted publickey for core from 147.75.109.163 port 57408 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U
Sep 4 17:54:15.210365 sshd[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:54:15.217969 systemd-logind[1441]: New session 23 of user core.
Sep 4 17:54:15.224470 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:54:15.533920 sshd[5227]: pam_unix(sshd:session): session closed for user core
Sep 4 17:54:15.542561 systemd[1]: sshd@25-10.128.0.52:22-147.75.109.163:57408.service: Deactivated successfully.
Sep 4 17:54:15.548347 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:54:15.551589 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:54:15.555054 systemd-logind[1441]: Removed session 23.
Sep 4 17:54:20.590601 systemd[1]: Started sshd@26-10.128.0.52:22-147.75.109.163:46368.service - OpenSSH per-connection server daemon (147.75.109.163:46368).
Sep 4 17:54:20.879060 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 46368 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U
Sep 4 17:54:20.881189 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:54:20.887940 systemd-logind[1441]: New session 24 of user core.
Sep 4 17:54:20.895384 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:54:21.157982 sshd[5243]: pam_unix(sshd:session): session closed for user core
Sep 4 17:54:21.164096 systemd[1]: sshd@26-10.128.0.52:22-147.75.109.163:46368.service: Deactivated successfully.
Sep 4 17:54:21.166863 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:54:21.168064 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:54:21.169891 systemd-logind[1441]: Removed session 24.
Sep 4 17:54:26.217564 systemd[1]: Started sshd@27-10.128.0.52:22-147.75.109.163:35014.service - OpenSSH per-connection server daemon (147.75.109.163:35014).
Sep 4 17:54:26.507649 sshd[5264]: Accepted publickey for core from 147.75.109.163 port 35014 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U
Sep 4 17:54:26.509607 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:54:26.516445 systemd-logind[1441]: New session 25 of user core.
Sep 4 17:54:26.522385 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:54:26.797125 sshd[5264]: pam_unix(sshd:session): session closed for user core
Sep 4 17:54:26.802170 systemd[1]: sshd@27-10.128.0.52:22-147.75.109.163:35014.service: Deactivated successfully.
Sep 4 17:54:26.804917 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:54:26.807069 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:54:26.809261 systemd-logind[1441]: Removed session 25.
Sep 4 17:54:31.853598 systemd[1]: Started sshd@28-10.128.0.52:22-147.75.109.163:35016.service - OpenSSH per-connection server daemon (147.75.109.163:35016).
Sep 4 17:54:32.138510 sshd[5276]: Accepted publickey for core from 147.75.109.163 port 35016 ssh2: RSA SHA256:3UawMEk03AfeR6A6/drmeg302df853gVTK6IGVvrB/U
Sep 4 17:54:32.140668 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:54:32.149301 systemd-logind[1441]: New session 26 of user core.
Sep 4 17:54:32.153397 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:54:32.422363 sshd[5276]: pam_unix(sshd:session): session closed for user core
Sep 4 17:54:32.428788 systemd[1]: sshd@28-10.128.0.52:22-147.75.109.163:35016.service: Deactivated successfully.
Sep 4 17:54:32.431979 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:54:32.433110 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:54:32.434939 systemd-logind[1441]: Removed session 26.
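[Editor's note] The tail of this log is a run of short-lived SSH sessions (18 through 26), each following the same systemd-logind pattern: "New session N of user core." followed shortly by "Removed session N." A small illustrative Python sketch (an assumption about the journal line format, not part of any tool shown here) that pairs those two events per session id and prints how long each session lasted, using the session 26 lines from just above:

import re
from datetime import datetime

# Two journal lines for session 26, copied from the end of the log above.
lines = [
    "Sep 4 17:54:32.149301 systemd-logind[1441]: New session 26 of user core.",
    "Sep 4 17:54:32.434939 systemd-logind[1441]: Removed session 26.",
]

# Hypothetical pattern for the systemd-logind open/close lines as rendered here.
pattern = re.compile(
    r"^(?P<ts>\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: "
    r"(?P<event>New session|Removed session) (?P<id>\d+)"
)

opened = {}
for line in lines:
    m = pattern.match(line)
    if not m:
        continue
    # The journal timestamp carries no year; assume 2024 (from the dates in this log).
    ts = datetime.strptime("2024 " + m.group("ts"), "%Y %b %d %H:%M:%S.%f")
    if m.group("event") == "New session":
        opened[m.group("id")] = ts
    elif m.group("id") in opened:
        duration = (ts - opened.pop(m.group("id"))).total_seconds()
        print(f"session {m.group('id')}: {duration:.3f}s")  # session 26: 0.286s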