Jan 17 12:20:54.093142 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:20:54.093195 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:20:54.093215 kernel: BIOS-provided physical RAM map:
Jan 17 12:20:54.093229 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 17 12:20:54.093242 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 17 12:20:54.093255 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 17 12:20:54.093272 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 17 12:20:54.093292 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 17 12:20:54.093307 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 17 12:20:54.093321 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 17 12:20:54.093335 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 17 12:20:54.093351 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 17 12:20:54.093365 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 17 12:20:54.093379 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 17 12:20:54.093403 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 17 12:20:54.093419 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 17 12:20:54.093435 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 17 12:20:54.093451 kernel: NX (Execute Disable) protection: active
Jan 17 12:20:54.093468 kernel: APIC: Static calls initialized
Jan 17 12:20:54.093484 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:20:54.093499 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 17 12:20:54.093530 kernel: SMBIOS 2.4 present.
Jan 17 12:20:54.093546 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 17 12:20:54.093563 kernel: Hypervisor detected: KVM
Jan 17 12:20:54.093585 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:20:54.093601 kernel: kvm-clock: using sched offset of 12315416044 cycles
Jan 17 12:20:54.093618 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:20:54.093636 kernel: tsc: Detected 2299.998 MHz processor
Jan 17 12:20:54.093652 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:20:54.093670 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:20:54.093687 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 17 12:20:54.093704 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 17 12:20:54.093721 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:20:54.093741 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 17 12:20:54.093757 kernel: Using GB pages for direct mapping
Jan 17 12:20:54.093774 kernel: Secure boot disabled
Jan 17 12:20:54.093791 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:20:54.093808 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 17 12:20:54.093825 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 17 12:20:54.093841 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 17 12:20:54.093866 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 17 12:20:54.093886 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 17 12:20:54.093904 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 17 12:20:54.093923 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 17 12:20:54.093941 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 17 12:20:54.093960 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 17 12:20:54.093978 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 17 12:20:54.094007 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 17 12:20:54.094132 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 17 12:20:54.094163 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 17 12:20:54.094181 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 17 12:20:54.094198 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 17 12:20:54.094215 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 17 12:20:54.094233 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 17 12:20:54.094251 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 17 12:20:54.094270 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 17 12:20:54.094296 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 17 12:20:54.094315 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:20:54.094333 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:20:54.094352 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:20:54.094370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 17 12:20:54.094388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 17 12:20:54.094407 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 17 12:20:54.094426 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 17 12:20:54.094444 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 17 12:20:54.094467 kernel: Zone ranges:
Jan 17 12:20:54.094485 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:20:54.094520 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 12:20:54.094538 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 12:20:54.094557 kernel: Movable zone start for each node
Jan 17 12:20:54.094575 kernel: Early memory node ranges
Jan 17 12:20:54.094593 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 17 12:20:54.094610 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 17 12:20:54.094629 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 17 12:20:54.094653 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 17 12:20:54.094671 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 17 12:20:54.094690 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 17 12:20:54.094706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:20:54.094725 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 17 12:20:54.094744 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 17 12:20:54.094763 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 17 12:20:54.094780 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 17 12:20:54.094799 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 12:20:54.094822 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:20:54.094841 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:20:54.094859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:20:54.094878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:20:54.094896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:20:54.094915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:20:54.094933 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:20:54.094952 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:20:54.094971 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 12:20:54.094993 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:20:54.095012 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:20:54.095038 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:20:54.095057 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:20:54.095077 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:20:54.095095 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:20:54.095113 kernel: kvm-guest: PV spinlocks enabled
Jan 17 12:20:54.095132 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:20:54.095152 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:20:54.095176 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:20:54.095195 kernel: random: crng init done
Jan 17 12:20:54.095213 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 12:20:54.095232 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:20:54.095251 kernel: Fallback order for Node 0: 0
Jan 17 12:20:54.095269 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 17 12:20:54.095288 kernel: Policy zone: Normal
Jan 17 12:20:54.095308 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:20:54.095330 kernel: software IO TLB: area num 2.
Jan 17 12:20:54.095350 kernel: Memory: 7513372K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 346952K reserved, 0K cma-reserved)
Jan 17 12:20:54.095368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:20:54.095387 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:20:54.095404 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:20:54.095423 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:20:54.095442 kernel: Dynamic Preempt: voluntary
Jan 17 12:20:54.095460 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:20:54.095486 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:20:54.095539 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:20:54.095560 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:20:54.095580 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:20:54.095604 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:20:54.095624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:20:54.095644 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:20:54.095663 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:20:54.095683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:20:54.095704 kernel: Console: colour dummy device 80x25
Jan 17 12:20:54.095728 kernel: printk: console [ttyS0] enabled
Jan 17 12:20:54.095748 kernel: ACPI: Core revision 20230628
Jan 17 12:20:54.095768 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:20:54.095789 kernel: x2apic enabled
Jan 17 12:20:54.095809 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:20:54.095830 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 17 12:20:54.095849 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 12:20:54.095868 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 17 12:20:54.095891 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 17 12:20:54.095912 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 17 12:20:54.095932 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:20:54.095952 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 17 12:20:54.095973 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 17 12:20:54.095993 kernel: Spectre V2 : Mitigation: IBRS
Jan 17 12:20:54.096019 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:20:54.096039 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:20:54.096059 kernel: RETBleed: Mitigation: IBRS
Jan 17 12:20:54.096084 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:20:54.096105 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 17 12:20:54.096125 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:20:54.096145 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 12:20:54.096165 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:20:54.096185 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:20:54.096205 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:20:54.096225 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:20:54.096245 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:20:54.096269 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 12:20:54.096290 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:20:54.096309 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:20:54.096329 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:20:54.096349 kernel: landlock: Up and running.
Jan 17 12:20:54.096369 kernel: SELinux: Initializing.
Jan 17 12:20:54.096389 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:20:54.096409 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:20:54.096428 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 17 12:20:54.096453 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:20:54.096472 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:20:54.096492 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:20:54.096538 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 17 12:20:54.096558 kernel: signal: max sigframe size: 1776
Jan 17 12:20:54.096576 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:20:54.096593 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:20:54.096610 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:20:54.096629 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:20:54.096653 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:20:54.096672 kernel: .... node #0, CPUs: #1
Jan 17 12:20:54.096692 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 12:20:54.096712 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 12:20:54.096731 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:20:54.096750 kernel: smpboot: Max logical packages: 1
Jan 17 12:20:54.096768 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 17 12:20:54.096787 kernel: devtmpfs: initialized
Jan 17 12:20:54.096811 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:20:54.096830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 17 12:20:54.096849 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:20:54.096870 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:20:54.096888 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:20:54.096907 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:20:54.096926 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:20:54.096945 kernel: audit: type=2000 audit(1737116452.877:1): state=initialized audit_enabled=0 res=1
Jan 17 12:20:54.096963 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:20:54.096988 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:20:54.097007 kernel: cpuidle: using governor menu
Jan 17 12:20:54.097034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:20:54.097052 kernel: dca service started, version 1.12.1
Jan 17 12:20:54.097070 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:20:54.097088 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:20:54.097107 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:20:54.097123 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:20:54.097140 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:20:54.097164 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:20:54.097182 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:20:54.097199 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:20:54.097218 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:20:54.097236 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:20:54.097254 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 12:20:54.097273 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:20:54.097291 kernel: ACPI: Interpreter enabled
Jan 17 12:20:54.097309 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:20:54.097335 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:20:54.097355 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:20:54.097375 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 12:20:54.097395 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 17 12:20:54.097415 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:20:54.099232 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:20:54.099442 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:20:54.099653 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:20:54.099678 kernel: PCI host bridge to bus 0000:00
Jan 17 12:20:54.099866 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:20:54.100044 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:20:54.100211 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:20:54.100382 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 17 12:20:54.100838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:20:54.101464 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:20:54.101996 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 17 12:20:54.102210 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:20:54.102399 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 12:20:54.102641 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 17 12:20:54.102822 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 17 12:20:54.103006 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 17 12:20:54.103202 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:20:54.103385 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 17 12:20:54.103597 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 17 12:20:54.103793 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:20:54.103992 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 17 12:20:54.104187 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 17 12:20:54.104217 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:20:54.104236 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:20:54.104254 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:20:54.104272 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:20:54.104290 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:20:54.104309 kernel: iommu: Default domain type: Translated
Jan 17 12:20:54.104328 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:20:54.104347 kernel: efivars: Registered efivars operations
Jan 17 12:20:54.104366 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:20:54.104389 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:20:54.104408 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 17 12:20:54.104426 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 17 12:20:54.104444 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 17 12:20:54.104462 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 17 12:20:54.104481 kernel: vgaarb: loaded
Jan 17 12:20:54.106700 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:20:54.106736 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:20:54.106756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:20:54.106783 kernel: pnp: PnP ACPI init
Jan 17 12:20:54.106803 kernel: pnp: PnP ACPI: found 7 devices
Jan 17 12:20:54.106822 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:20:54.106842 kernel: NET: Registered PF_INET protocol family
Jan 17 12:20:54.106861 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:20:54.106880 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 12:20:54.106899 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:20:54.106918 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:20:54.106937 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 12:20:54.106961 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 12:20:54.106980 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 12:20:54.107000 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 12:20:54.107026 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:20:54.107045 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:20:54.107249 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:20:54.107418 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:20:54.107633 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:20:54.107804 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 17 12:20:54.107994 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:20:54.108026 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:20:54.108046 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 12:20:54.108065 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 17 12:20:54.108084 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:20:54.108104 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 17 12:20:54.108124 kernel: clocksource: Switched to clocksource tsc
Jan 17 12:20:54.108148 kernel: Initialise system trusted keyrings
Jan 17 12:20:54.108166 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 12:20:54.108185 kernel: Key type asymmetric registered
Jan 17 12:20:54.108203 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:20:54.108222 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:20:54.108241 kernel: io scheduler mq-deadline registered
Jan 17 12:20:54.108260 kernel: io scheduler kyber registered
Jan 17 12:20:54.108279 kernel: io scheduler bfq registered
Jan 17 12:20:54.108297 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:20:54.108321 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:20:54.108586 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 17 12:20:54.108612 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 17 12:20:54.108799 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 17 12:20:54.108822 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:20:54.109000 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 17 12:20:54.109031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:20:54.109051 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:20:54.109070 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 12:20:54.109095 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 17 12:20:54.109114 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 17 12:20:54.109300 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 17 12:20:54.109326 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:20:54.109345 kernel: i8042: Warning: Keylock active
Jan 17 12:20:54.109363 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:20:54.109382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:20:54.109599 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 12:20:54.109776 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 12:20:54.109944 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:20:53 UTC (1737116453)
Jan 17 12:20:54.110118 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 12:20:54.110142 kernel: intel_pstate: CPU model not supported
Jan 17 12:20:54.110161 kernel: pstore: Using crash dump compression: deflate
Jan 17 12:20:54.110180 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 12:20:54.110198 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:20:54.110217 kernel: Segment Routing with IPv6
Jan 17 12:20:54.110241 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:20:54.110260 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:20:54.110279 kernel: Key type dns_resolver registered
Jan 17 12:20:54.110297 kernel: IPI shorthand broadcast: enabled
Jan 17 12:20:54.110316 kernel: sched_clock: Marking stable (875004380, 171398232)->(1112743984, -66341372)
Jan 17 12:20:54.110335 kernel: registered taskstats version 1
Jan 17 12:20:54.110355 kernel: Loading compiled-in X.509 certificates
Jan 17 12:20:54.110374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:20:54.110392 kernel: Key type .fscrypt registered
Jan 17 12:20:54.110415 kernel: Key type fscrypt-provisioning registered
Jan 17 12:20:54.110433 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:20:54.110452 kernel: ima: No architecture policies found
Jan 17 12:20:54.110471 kernel: clk: Disabling unused clocks
Jan 17 12:20:54.110490 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:20:54.110533 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:20:54.110553 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:20:54.110571 kernel: Run /init as init process
Jan 17 12:20:54.110595 kernel: with arguments:
Jan 17 12:20:54.110613 kernel: /init
Jan 17 12:20:54.110632 kernel: with environment:
Jan 17 12:20:54.110650 kernel: HOME=/
Jan 17 12:20:54.110668 kernel: TERM=linux
Jan 17 12:20:54.110687 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:20:54.110706 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:20:54.110728 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:20:54.110752 systemd[1]: Detected virtualization google.
Jan 17 12:20:54.110770 systemd[1]: Detected architecture x86-64.
Jan 17 12:20:54.110787 systemd[1]: Running in initrd.
Jan 17 12:20:54.110804 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:20:54.110822 systemd[1]: Hostname set to .
Jan 17 12:20:54.110841 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:20:54.110859 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:20:54.110878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:20:54.110901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:20:54.110921 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:20:54.110940 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:20:54.110960 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:20:54.110979 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:20:54.111001 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:20:54.111028 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:20:54.111050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:20:54.111070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:20:54.111109 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:20:54.111132 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:20:54.111151 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:20:54.111171 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:20:54.111195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:20:54.111215 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:20:54.111236 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:20:54.111256 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:20:54.111274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:20:54.111293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:20:54.111313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:20:54.111332 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:20:54.111356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:20:54.111377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:20:54.111397 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:20:54.111416 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:20:54.111436 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:20:54.111455 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:20:54.111814 systemd-journald[183]: Collecting audit messages is disabled.
Jan 17 12:20:54.111872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:20:54.111904 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:20:54.111925 systemd-journald[183]: Journal started
Jan 17 12:20:54.111965 systemd-journald[183]: Runtime Journal (/run/log/journal/7b52e734d76f4836ac97bcfe1efc2706) is 8.0M, max 148.7M, 140.7M free.
Jan 17 12:20:54.117275 systemd-modules-load[184]: Inserted module 'overlay'
Jan 17 12:20:54.125650 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:20:54.123287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:20:54.133020 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:20:54.147753 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:20:54.163179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:20:54.167674 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:20:54.169984 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 17 12:20:54.177676 kernel: Bridge firewalling registered
Jan 17 12:20:54.170864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:20:54.174767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:20:54.186350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:20:54.195015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:20:54.205786 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:20:54.216938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:20:54.223727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:20:54.239913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:20:54.250806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:20:54.261111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:20:54.261946 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:20:54.283810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:20:54.299266 systemd-resolved[212]: Positive Trust Anchors:
Jan 17 12:20:54.299789 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:20:54.299862 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:20:54.306626 systemd-resolved[212]: Defaulting to hostname 'linux'.
Jan 17 12:20:54.328670 dracut-cmdline[217]: dracut-dracut-053
Jan 17 12:20:54.328670 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:20:54.308308 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:20:54.314330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:20:54.412550 kernel: SCSI subsystem initialized
Jan 17 12:20:54.423559 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:20:54.435548 kernel: iscsi: registered transport (tcp)
Jan 17 12:20:54.459592 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:20:54.459680 kernel: QLogic iSCSI HBA Driver
Jan 17 12:20:54.513972 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:20:54.518777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:20:54.558112 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:20:54.558198 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:20:54.558227 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:20:54.604547 kernel: raid6: avx2x4 gen() 18032 MB/s
Jan 17 12:20:54.621535 kernel: raid6: avx2x2 gen() 18112 MB/s
Jan 17 12:20:54.639033 kernel: raid6: avx2x1 gen() 13592 MB/s
Jan 17 12:20:54.639090 kernel: raid6: using algorithm avx2x2 gen() 18112 MB/s
Jan 17 12:20:54.657193 kernel: raid6: .... xor() 17622 MB/s, rmw enabled
Jan 17 12:20:54.657249 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:20:54.680546 kernel: xor: automatically using best checksumming function avx
Jan 17 12:20:54.855541 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:20:54.869488 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:20:54.877757 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:20:54.900291 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 17 12:20:54.907772 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:20:54.920849 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:20:54.951964 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jan 17 12:20:54.988837 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:20:54.995746 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:20:55.089674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:20:55.099749 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:20:55.137683 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:20:55.142755 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:20:55.147266 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:20:55.159645 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:20:55.173666 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:20:55.212892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:20:55.213707 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:20:55.236150 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:20:55.236221 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:20:55.250528 kernel: scsi host0: Virtio SCSI HBA
Jan 17 12:20:55.275901 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 17 12:20:55.297858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:20:55.316688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:20:55.323894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:20:55.332699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:20:55.333302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:20:55.346707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:20:55.360462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:20:55.373697 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 17 12:20:55.390394 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 17 12:20:55.390713 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 17 12:20:55.390955 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 17 12:20:55.391176 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 12:20:55.391649 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:20:55.391679 kernel: GPT:17805311 != 25165823
Jan 17 12:20:55.391701 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:20:55.391722 kernel: GPT:17805311 != 25165823
Jan 17 12:20:55.391742 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:20:55.391765 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:20:55.391799 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 17 12:20:55.391673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:20:55.401810 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:20:55.443683 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:20:55.445696 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (452)
Jan 17 12:20:55.457546 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443)
Jan 17 12:20:55.482138 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 17 12:20:55.489451 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 17 12:20:55.501866 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 17 12:20:55.508186 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 17 12:20:55.508328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 17 12:20:55.523752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:20:55.537606 disk-uuid[549]: Primary Header is updated.
Jan 17 12:20:55.537606 disk-uuid[549]: Secondary Entries is updated.
Jan 17 12:20:55.537606 disk-uuid[549]: Secondary Header is updated.
Jan 17 12:20:55.551548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:20:55.578551 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:20:55.591547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:20:56.591719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:20:56.591805 disk-uuid[550]: The operation has completed successfully.
Jan 17 12:20:56.664313 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:20:56.664468 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:20:56.698758 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:20:56.718864 sh[567]: Success
Jan 17 12:20:56.732761 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:20:56.818636 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:20:56.826219 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:20:56.850109 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:20:56.900157 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:20:56.900256 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:20:56.900283 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:20:56.916642 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:20:56.916748 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:20:56.957540 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:20:56.962344 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:20:56.963310 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:20:56.968737 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:20:57.009687 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:20:57.057738 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:20:57.057783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:20:57.057818 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:20:57.071581 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 17 12:20:57.071677 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:20:57.087096 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:20:57.106928 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:20:57.111173 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:20:57.140813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:20:57.214216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:20:57.219791 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:20:57.333334 systemd-networkd[750]: lo: Link UP
Jan 17 12:20:57.333351 systemd-networkd[750]: lo: Gained carrier
Jan 17 12:20:57.336767 systemd-networkd[750]: Enumeration completed
Jan 17 12:20:57.350431 ignition[685]: Ignition 2.19.0
Jan 17 12:20:57.337451 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:20:57.350449 ignition[685]: Stage: fetch-offline
Jan 17 12:20:57.337459 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:20:57.350523 ignition[685]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:20:57.337674 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:20:57.350541 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:20:57.340359 systemd-networkd[750]: eth0: Link UP
Jan 17 12:20:57.350723 ignition[685]: parsed url from cmdline: ""
Jan 17 12:20:57.340368 systemd-networkd[750]: eth0: Gained carrier
Jan 17 12:20:57.350730 ignition[685]: no config URL provided
Jan 17 12:20:57.340385 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:20:57.350740 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:20:57.350615 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.38/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 17 12:20:57.350755 ignition[685]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:20:57.353032 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:20:57.350766 ignition[685]: failed to fetch config: resource requires networking
Jan 17 12:20:57.370446 systemd[1]: Reached target network.target - Network.
Jan 17 12:20:57.351051 ignition[685]: Ignition finished successfully
Jan 17 12:20:57.390780 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:20:57.446810 ignition[758]: Ignition 2.19.0
Jan 17 12:20:57.458846 unknown[758]: fetched base config from "system"
Jan 17 12:20:57.446819 ignition[758]: Stage: fetch
Jan 17 12:20:57.458874 unknown[758]: fetched base config from "system"
Jan 17 12:20:57.447024 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:20:57.458888 unknown[758]: fetched user config from "gcp"
Jan 17 12:20:57.447036 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:20:57.461986 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:20:57.447155 ignition[758]: parsed url from cmdline: ""
Jan 17 12:20:57.481817 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:20:57.447162 ignition[758]: no config URL provided
Jan 17 12:20:57.530063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:20:57.447170 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:20:57.568788 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:20:57.447181 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:20:57.617546 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:20:57.447204 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 17 12:20:57.633452 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:20:57.451939 ignition[758]: GET result: OK
Jan 17 12:20:57.649751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:20:57.452061 ignition[758]: parsing config with SHA512: 937dc624f0b598a8495a7b9a61e131a318320a59a660619289a0b585fa5fd1d80e29d56a417c25b28076f422fc6f4529e234da933bb255521a613458c1ef3480
Jan 17 12:20:57.666714 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:20:57.459798 ignition[758]: fetch: fetch complete
Jan 17 12:20:57.680721 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:20:57.459815 ignition[758]: fetch: fetch passed
Jan 17 12:20:57.696719 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:20:57.459889 ignition[758]: Ignition finished successfully
Jan 17 12:20:57.718953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:20:57.527247 ignition[765]: Ignition 2.19.0
Jan 17 12:20:57.527258 ignition[765]: Stage: kargs
Jan 17 12:20:57.527466 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:20:57.527481 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:20:57.528774 ignition[765]: kargs: kargs passed
Jan 17 12:20:57.528846 ignition[765]: Ignition finished successfully
Jan 17 12:20:57.615095 ignition[771]: Ignition 2.19.0
Jan 17 12:20:57.615104 ignition[771]: Stage: disks
Jan 17 12:20:57.615293 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:20:57.615305 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 17 12:20:57.616369 ignition[771]: disks: disks passed
Jan 17 12:20:57.616435 ignition[771]: Ignition finished successfully
Jan 17 12:20:57.779748 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 12:20:57.949259 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:20:57.982705 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:20:58.098903 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:20:58.099813 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:20:58.100692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:20:58.131807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:20:58.149669 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:20:58.207723 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Jan 17 12:20:58.207776 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:58.207800 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:58.207822 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:20:58.159243 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:20:58.242663 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:20:58.242696 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:20:58.159343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:20:58.159386 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:20:58.201376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:20:58.232859 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:20:58.279868 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:20:58.380462 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:20:58.391688 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:20:58.402666 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:20:58.413628 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:20:58.546825 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:20:58.574691 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:20:58.602713 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:58.597729 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:20:58.611832 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:20:58.639490 ignition[899]: INFO : Ignition 2.19.0 Jan 17 12:20:58.639490 ignition[899]: INFO : Stage: mount Jan 17 12:20:58.653674 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:58.653674 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:58.653674 ignition[899]: INFO : mount: mount passed Jan 17 12:20:58.653674 ignition[899]: INFO : Ignition finished successfully Jan 17 12:20:58.642184 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:20:58.683119 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:20:58.707690 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:20:59.105801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 17 12:20:59.150578 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (911) Jan 17 12:20:59.169000 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:59.169097 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:59.169124 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:20:59.192664 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:20:59.192757 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:20:59.195894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:20:59.232858 systemd-networkd[750]: eth0: Gained IPv6LL Jan 17 12:20:59.240702 ignition[928]: INFO : Ignition 2.19.0 Jan 17 12:20:59.240702 ignition[928]: INFO : Stage: files Jan 17 12:20:59.255656 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:59.255656 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:59.255656 ignition[928]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:20:59.255656 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:20:59.252079 unknown[928]: wrote ssh authorized keys file for user: core Jan 17 12:20:59.357730 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:20:59.357730 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:20:59.455723 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:20:59.618249 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:20:59.983362 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:21:00.496656 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:21:00.496656 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:00.536691 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:00.536691 ignition[928]: INFO : files: files passed Jan 17 12:21:00.536691 ignition[928]: INFO : Ignition finished successfully Jan 17 12:21:00.502738 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:21:00.531823 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:21:00.537885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:21:00.578295 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:21:00.743720 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:00.743720 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:00.578419 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
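[Annotation; not part of the captured log.] The op(1)-op(e) sequence above is Ignition's "files" stage replaying a user-supplied config that the journal itself never prints. As a hedged illustration only, a minimal Python sketch of the general shape an Ignition v3 config would need in order to drive ops like op(4) (a file write), op(9) (the sysext symlink), and op(b)-op(d) (unit install plus preset); the paths are taken from the log, while the payload, unit contents, and spec version are placeholders.

import json

# Hypothetical reconstruction of the config shape; only the paths mirror the log.
config = {
    "ignition": {"version": "3.0.0"},  # illustrative spec version
    "storage": {
        "files": [
            {
                # op(4): the /sysroot/home/core/install.sh write (paths are rooted at /sysroot)
                "path": "/home/core/install.sh",
                "mode": 0o755,  # serialized as the integer 493
                "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"},  # placeholder body
            },
        ],
        "links": [
            {
                # op(9): the /etc/extensions/kubernetes.raw symlink seen above
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            },
        ],
    },
    "systemd": {
        "units": [
            {
                # op(b)-op(d): install prepare-helm.service, then preset it to enabled
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=placeholder\n[Install]\nWantedBy=multi-user.target\n",
            },
        ],
    },
}
print(json.dumps(config, indent=2))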
Jan 17 12:21:00.792744 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:00.643255 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:00.647978 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:21:00.678825 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:21:00.760880 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:21:00.761018 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:21:00.783726 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:21:00.802862 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:21:00.826938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:21:00.833833 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:21:00.898884 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:00.924769 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:21:00.969608 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:21:00.969836 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:21:00.980617 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:01.008736 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:01.008893 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:21:01.037778 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:21:01.037906 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:01.068698 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:21:01.085710 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:21:01.100697 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:21:01.119704 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:01.137711 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:01.154708 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:21:01.172702 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:01.172889 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:21:01.203719 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:21:01.221701 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:21:01.236702 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:21:01.236831 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:01.264966 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:01.274965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:01.289962 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:21:01.290062 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:01.307930 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 17 12:21:01.308019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:01.346034 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:21:01.346157 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:01.477715 ignition[981]: INFO : Ignition 2.19.0 Jan 17 12:21:01.477715 ignition[981]: INFO : Stage: umount Jan 17 12:21:01.477715 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:01.477715 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:01.477715 ignition[981]: INFO : umount: umount passed Jan 17 12:21:01.477715 ignition[981]: INFO : Ignition finished successfully Jan 17 12:21:01.356960 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:21:01.357036 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:21:01.381691 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:21:01.418470 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:21:01.442689 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:21:01.442836 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:01.453767 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:21:01.453864 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:01.466872 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:21:01.467037 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:21:01.493430 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:21:01.494107 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:21:01.494228 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:21:01.508895 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:21:01.508967 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:21:01.528773 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:21:01.528882 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:21:01.549782 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:21:01.549877 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:21:01.566785 systemd[1]: Stopped target network.target - Network. Jan 17 12:21:01.581680 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:21:01.581830 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:01.601791 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:21:01.617673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:21:01.619623 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:01.638689 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:21:01.653701 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:21:01.670772 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:21:01.670861 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:01.690767 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:21:01.690864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 17 12:21:01.710757 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:21:01.710864 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:21:01.730794 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:21:01.730895 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:01.748785 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:21:01.748890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:01.767084 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:21:01.772614 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 17 12:21:01.784898 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:21:01.804376 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:21:01.804546 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:21:01.823602 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:21:01.823861 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:21:01.832495 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:21:01.832602 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:01.852681 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:21:01.880642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:21:01.880865 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:01.888945 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:21:01.889018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:01.906981 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:21:01.907056 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:01.933872 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:21:01.933954 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:01.955075 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:01.976212 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:21:01.976401 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:02.002721 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:21:02.002789 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:02.017980 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:21:02.018064 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:02.046838 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:21:02.046923 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:02.073873 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:21:02.074082 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:02.100950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:02.101047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:21:02.135728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:21:02.165681 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:21:02.165921 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:02.186891 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:21:02.186978 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:02.207885 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:21:02.207966 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:02.228900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:02.228999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:02.236426 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:21:02.236604 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:21:02.254251 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:21:02.254373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:21:02.272188 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:21:02.305762 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:21:02.321900 systemd[1]: Switching root. Jan 17 12:21:02.367684 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:21:02.727667 systemd-journald[183]: Journal stopped Jan 17 12:20:54.094370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 17 12:20:54.094388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 12:20:54.094407 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 12:20:54.094426 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 12:20:54.094444 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 17 12:20:54.094467 kernel: Zone ranges: Jan 17 12:20:54.094485 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:20:54.094520 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 12:20:54.094538 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 12:20:54.094557 kernel: Movable zone start for each node Jan 17 12:20:54.094575 kernel: Early memory node ranges Jan 17 12:20:54.094593 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 12:20:54.094610 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 12:20:54.094629 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 12:20:54.094653 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 12:20:54.094671 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 12:20:54.094690 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 12:20:54.094706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:20:54.094725 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 12:20:54.094744 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 12:20:54.094763 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 12:20:54.094780 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 12:20:54.094799 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 12:20:54.094822 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:20:54.094841 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:20:54.094859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:20:54.094878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:20:54.094896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:20:54.094915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:20:54.094933 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:20:54.094952 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:20:54.094971 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 12:20:54.094993 kernel: Booting paravirtualized kernel on KVM Jan 17 12:20:54.095012 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:20:54.095038 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:20:54.095057 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:20:54.095077 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17
12:20:54.095095 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:20:54.095113 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:20:54.095132 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:20:54.095152 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:20:54.095176 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:20:54.095195 kernel: random: crng init done Jan 17 12:20:54.095213 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 12:20:54.095232 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:20:54.095251 kernel: Fallback order for Node 0: 0 Jan 17 12:20:54.095269 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 12:20:54.095288 kernel: Policy zone: Normal Jan 17 12:20:54.095308 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:20:54.095330 kernel: software IO TLB: area num 2. Jan 17 12:20:54.095350 kernel: Memory: 7513372K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 346952K reserved, 0K cma-reserved) Jan 17 12:20:54.095368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:20:54.095387 kernel: Kernel/User page tables isolation: enabled Jan 17 12:20:54.095404 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:20:54.095423 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:20:54.095442 kernel: Dynamic Preempt: voluntary Jan 17 12:20:54.095460 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:20:54.095486 kernel: rcu: RCU event tracing is enabled. Jan 17 12:20:54.095539 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:20:54.095560 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:20:54.095580 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:20:54.095604 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:20:54.095624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:20:54.095644 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:20:54.095663 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:20:54.095683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:20:54.095704 kernel: Console: colour dummy device 80x25 Jan 17 12:20:54.095728 kernel: printk: console [ttyS0] enabled Jan 17 12:20:54.095748 kernel: ACPI: Core revision 20230628 Jan 17 12:20:54.095768 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:20:54.095789 kernel: x2apic enabled Jan 17 12:20:54.095809 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:20:54.095830 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 12:20:54.095849 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 12:20:54.095868 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 17 12:20:54.095891 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 12:20:54.095912 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 12:20:54.095932 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:20:54.095952 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 12:20:54.095973 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 12:20:54.095993 kernel: Spectre V2 : Mitigation: IBRS Jan 17 12:20:54.096019 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:20:54.096039 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:20:54.096059 kernel: RETBleed: Mitigation: IBRS Jan 17 12:20:54.096084 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:20:54.096105 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 12:20:54.096125 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:20:54.096145 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 12:20:54.096165 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:20:54.096185 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:20:54.096205 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:20:54.096225 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:20:54.096245 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:20:54.096269 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 12:20:54.096290 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:20:54.096309 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:20:54.096329 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:20:54.096349 kernel: landlock: Up and running. Jan 17 12:20:54.096369 kernel: SELinux: Initializing. Jan 17 12:20:54.096389 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:20:54.096409 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:20:54.096428 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 12:20:54.096453 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:20:54.096472 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:20:54.096492 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:20:54.096538 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 12:20:54.096558 kernel: signal: max sigframe size: 1776 Jan 17 12:20:54.096576 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:20:54.096593 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:20:54.096610 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:20:54.096629 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:20:54.096653 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:20:54.096672 kernel: .... node #0, CPUs: #1 Jan 17 12:20:54.096692 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 12:20:54.096712 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 12:20:54.096731 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:20:54.096750 kernel: smpboot: Max logical packages: 1 Jan 17 12:20:54.096768 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 12:20:54.096787 kernel: devtmpfs: initialized Jan 17 12:20:54.096811 kernel: x86/mm: Memory block size: 128MB Jan 17 12:20:54.096830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 12:20:54.096849 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:20:54.096870 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:20:54.096888 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:20:54.096907 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:20:54.096926 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:20:54.096945 kernel: audit: type=2000 audit(1737116452.877:1): state=initialized audit_enabled=0 res=1 Jan 17 12:20:54.096963 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:20:54.096988 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:20:54.097007 kernel: cpuidle: using governor menu Jan 17 12:20:54.097034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:20:54.097052 kernel: dca service started, version 1.12.1 Jan 17 12:20:54.097070 kernel: PCI: Using configuration type 1 for base access Jan 17 12:20:54.097088 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:20:54.097107 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:20:54.097123 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:20:54.097140 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:20:54.097164 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:20:54.097182 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:20:54.097199 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:20:54.097218 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:20:54.097236 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:20:54.097254 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 12:20:54.097273 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:20:54.097291 kernel: ACPI: Interpreter enabled Jan 17 12:20:54.097309 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:20:54.097335 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:20:54.097355 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:20:54.097375 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 12:20:54.097395 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 12:20:54.097415 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:20:54.099232 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:20:54.099442 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:20:54.099653 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:20:54.099678 kernel: PCI host bridge to bus 0000:00 Jan 17 12:20:54.099866 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:20:54.100044 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:20:54.100211 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:20:54.100382 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 17 12:20:54.100838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:20:54.101464 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:20:54.101996 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 17 12:20:54.102210 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 12:20:54.102399 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 12:20:54.102641 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 17 12:20:54.102822 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 17 12:20:54.103006 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 17 12:20:54.103202 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:20:54.103385 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 17 12:20:54.103597 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 17 12:20:54.103793 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:20:54.103992 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 17 12:20:54.104187 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 17 12:20:54.104217 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:20:54.104236 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:20:54.104254 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:20:54.104272 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:20:54.104290 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:20:54.104309 kernel: iommu: Default domain type: Translated Jan 17 12:20:54.104328 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:20:54.104347 kernel: efivars: Registered efivars operations Jan 17 12:20:54.104366 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:20:54.104389 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:20:54.104408 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 17 12:20:54.104426 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 17 12:20:54.104444 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 17 12:20:54.104462 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 17 12:20:54.104481 kernel: vgaarb: loaded Jan 17 12:20:54.106700 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:20:54.106736 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:20:54.106756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:20:54.106783 kernel: pnp: PnP ACPI init Jan 17 12:20:54.106803 kernel: pnp: PnP ACPI: found 7 devices Jan 17 12:20:54.106822 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:20:54.106842 kernel: NET: Registered PF_INET protocol family Jan 17 12:20:54.106861 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:20:54.106880 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 12:20:54.106899 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:20:54.106918 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:20:54.106937 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 12:20:54.106961 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 12:20:54.106980 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:20:54.107000 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:20:54.107026 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:20:54.107045 kernel: NET: Registered PF_XDP protocol family Jan 17 12:20:54.107249 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:20:54.107418 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:20:54.107633 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:20:54.107804 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 17 12:20:54.107994 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:20:54.108026 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:20:54.108046 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 12:20:54.108065 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 17 12:20:54.108084 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:20:54.108104 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 12:20:54.108124 kernel: clocksource: Switched to clocksource tsc Jan 17 12:20:54.108148 kernel: Initialise system trusted keyrings Jan 17 12:20:54.108166 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 17 12:20:54.108185 kernel: Key type asymmetric registered Jan 17 12:20:54.108203 kernel: Asymmetric key parser 'x509' registered Jan 17 12:20:54.108222 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:20:54.108241 kernel: io scheduler mq-deadline registered Jan 17 12:20:54.108260 kernel: io scheduler kyber registered Jan 17 12:20:54.108279 kernel: io scheduler bfq registered Jan 17 12:20:54.108297 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:20:54.108321 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 12:20:54.108586 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 17 12:20:54.108612 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 17 12:20:54.108799 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 17 12:20:54.108822 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 12:20:54.109000 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 17 12:20:54.109031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:20:54.109051 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:20:54.109070 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 12:20:54.109095 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 17 12:20:54.109114 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 17 12:20:54.109300 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 17 12:20:54.109326 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:20:54.109345 kernel: i8042: Warning: Keylock active Jan 17 12:20:54.109363 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:20:54.109382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:20:54.109599 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 12:20:54.109776 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 12:20:54.109944 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:20:53 UTC (1737116453) Jan 17 12:20:54.110118 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 12:20:54.110142 kernel: intel_pstate: CPU model not supported Jan 17 12:20:54.110161 kernel: pstore: Using crash dump compression: deflate Jan 17 12:20:54.110180 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:20:54.110198 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:20:54.110217 kernel: Segment Routing with IPv6 Jan 17 12:20:54.110241 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:20:54.110260 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:20:54.110279 kernel: Key type dns_resolver registered Jan 17 12:20:54.110297 kernel: IPI shorthand broadcast: enabled Jan 17 12:20:54.110316 kernel: sched_clock: Marking stable (875004380, 171398232)->(1112743984, -66341372) Jan 17 12:20:54.110335 kernel: registered taskstats version 1 Jan 17 12:20:54.110355 kernel: Loading compiled-in X.509 certificates Jan 17 12:20:54.110374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:20:54.110392 kernel: Key type .fscrypt registered Jan 17 12:20:54.110415 kernel: Key type fscrypt-provisioning registered Jan 17 12:20:54.110433 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:20:54.110452 kernel: ima: No architecture policies found Jan 17 
12:20:54.110471 kernel: clk: Disabling unused clocks Jan 17 12:20:54.110490 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:20:54.110533 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:20:54.110553 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:20:54.110571 kernel: Run /init as init process Jan 17 12:20:54.110595 kernel: with arguments: Jan 17 12:20:54.110613 kernel: /init Jan 17 12:20:54.110632 kernel: with environment: Jan 17 12:20:54.110650 kernel: HOME=/ Jan 17 12:20:54.110668 kernel: TERM=linux Jan 17 12:20:54.110687 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:20:54.110706 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:20:54.110728 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:20:54.110752 systemd[1]: Detected virtualization google. Jan 17 12:20:54.110770 systemd[1]: Detected architecture x86-64. Jan 17 12:20:54.110787 systemd[1]: Running in initrd. Jan 17 12:20:54.110804 systemd[1]: No hostname configured, using default hostname. Jan 17 12:20:54.110822 systemd[1]: Hostname set to <localhost>. Jan 17 12:20:54.110841 systemd[1]: Initializing machine ID from random generator. Jan 17 12:20:54.110859 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:20:54.110878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:20:54.110901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:20:54.110921 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:20:54.110940 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:20:54.110960 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:20:54.110979 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:20:54.111001 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:20:54.111028 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:20:54.111050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:20:54.111070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:20:54.111109 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:20:54.111132 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:20:54.111151 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:20:54.111171 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:20:54.111195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:20:54.111215 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:20:54.111236 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:20:54.111256 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
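[Annotation; not part of the captured log.] A quick consistency check on the rtc_cmos entry a little earlier: the epoch value it prints in parentheses does correspond to the ISO timestamp next to it.

import datetime as dt

# rtc_cmos reported: "setting system clock to 2025-01-17T12:20:53 UTC (1737116453)"
assert dt.datetime.fromtimestamp(1737116453, tz=dt.timezone.utc) \
    == dt.datetime(2025, 1, 17, 12, 20, 53, tzinfo=dt.timezone.utc)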
Jan 17 12:20:54.111274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:20:54.111293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:20:54.111313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:20:54.111332 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:20:54.111356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:20:54.111377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:20:54.111397 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:20:54.111416 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:20:54.111436 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:20:54.111455 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:20:54.111814 systemd-journald[183]: Collecting audit messages is disabled. Jan 17 12:20:54.111872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:54.111904 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:20:54.111925 systemd-journald[183]: Journal started Jan 17 12:20:54.111965 systemd-journald[183]: Runtime Journal (/run/log/journal/7b52e734d76f4836ac97bcfe1efc2706) is 8.0M, max 148.7M, 140.7M free. Jan 17 12:20:54.117275 systemd-modules-load[184]: Inserted module 'overlay' Jan 17 12:20:54.125650 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:20:54.123287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:20:54.133020 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:20:54.147753 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:20:54.163179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:20:54.167674 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:20:54.169984 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 17 12:20:54.177676 kernel: Bridge firewalling registered Jan 17 12:20:54.170864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:54.174767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:20:54.186350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:20:54.195015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:20:54.205786 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:20:54.216938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:20:54.223727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:20:54.239913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:20:54.250806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:20:54.261111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:20:54.261946 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 17 12:20:54.283810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:20:54.299266 systemd-resolved[212]: Positive Trust Anchors: Jan 17 12:20:54.299789 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:20:54.299862 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:20:54.306626 systemd-resolved[212]: Defaulting to hostname 'linux'. Jan 17 12:20:54.328670 dracut-cmdline[217]: dracut-dracut-053 Jan 17 12:20:54.328670 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:20:54.308308 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:20:54.314330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:20:54.412550 kernel: SCSI subsystem initialized Jan 17 12:20:54.423559 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:20:54.435548 kernel: iscsi: registered transport (tcp) Jan 17 12:20:54.459592 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:20:54.459680 kernel: QLogic iSCSI HBA Driver Jan 17 12:20:54.513972 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:20:54.518777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:20:54.558112 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:20:54.558198 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:20:54.558227 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:20:54.604547 kernel: raid6: avx2x4 gen() 18032 MB/s Jan 17 12:20:54.621535 kernel: raid6: avx2x2 gen() 18112 MB/s Jan 17 12:20:54.639033 kernel: raid6: avx2x1 gen() 13592 MB/s Jan 17 12:20:54.639090 kernel: raid6: using algorithm avx2x2 gen() 18112 MB/s Jan 17 12:20:54.657193 kernel: raid6: .... xor() 17622 MB/s, rmw enabled Jan 17 12:20:54.657249 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:20:54.680546 kernel: xor: automatically using best checksumming function avx Jan 17 12:20:54.855541 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:20:54.869488 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:20:54.877757 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:20:54.900291 systemd-udevd[399]: Using default interface naming scheme 'v255'. Jan 17 12:20:54.907772 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 12:20:54.920849 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:20:54.951964 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 17 12:20:54.988837 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:20:54.995746 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:20:55.089674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:20:55.099749 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:20:55.137683 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:20:55.142755 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:20:55.147266 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:20:55.159645 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:20:55.173666 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:20:55.212892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:20:55.213707 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:20:55.236150 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:20:55.236221 kernel: AES CTR mode by8 optimization enabled Jan 17 12:20:55.250528 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:20:55.275901 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 17 12:20:55.297858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:20:55.316688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:20:55.323894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:20:55.332699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:20:55.333302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:55.346707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:55.360462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:55.373697 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 17 12:20:55.390394 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 17 12:20:55.390713 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 17 12:20:55.390955 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 17 12:20:55.391176 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:20:55.391649 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:20:55.391679 kernel: GPT:17805311 != 25165823 Jan 17 12:20:55.391701 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:20:55.391722 kernel: GPT:17805311 != 25165823 Jan 17 12:20:55.391742 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:20:55.391765 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:20:55.391799 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 17 12:20:55.391673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:55.401810 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:20:55.443683 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
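[Annotation; not part of the captured log.] The GPT complaints above are what the kernel prints when an image built for a smaller disk (alternate header recorded at LBA 17805311) is attached to a larger one (actual last LBA 25165823, since sda has 25165824 512-byte blocks); the disk-uuid step further down then rewrites the headers. A hedged sketch of the comparison involved, reading the AlternateLBA field at byte offset 32 of the primary GPT header in LBA 1 per the UEFI layout; the device path and lack of error handling are illustrative.

import struct

def gpt_alternate_lba(dev_path="/dev/sda", sector=512):
    """Return (alternate_lba, actual_last_lba) as the kernel compares them."""
    with open(dev_path, "rb") as dev:
        dev.seek(sector)                # primary GPT header lives in LBA 1
        hdr = dev.read(92)
        assert hdr[:8] == b"EFI PART"   # GPT signature
        alt = struct.unpack_from("<Q", hdr, 32)[0]  # AlternateLBA field
        dev.seek(0, 2)                  # seek to end to size the disk
        last = dev.tell() // sector - 1
    return alt, last

# On this VM: (17805311, 25165823) -- a mismatch, hence the warning.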
Jan 17 12:20:55.445696 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (452) Jan 17 12:20:55.457546 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443) Jan 17 12:20:55.482138 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 17 12:20:55.489451 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 17 12:20:55.501866 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 12:20:55.508186 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 17 12:20:55.508328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 17 12:20:55.523752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:20:55.537606 disk-uuid[549]: Primary Header is updated. Jan 17 12:20:55.537606 disk-uuid[549]: Secondary Entries is updated. Jan 17 12:20:55.537606 disk-uuid[549]: Secondary Header is updated. Jan 17 12:20:55.551548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:20:55.578551 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:20:55.591547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:20:56.591719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:20:56.591805 disk-uuid[550]: The operation has completed successfully. Jan 17 12:20:56.664313 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:20:56.664468 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:20:56.698758 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:20:56.718864 sh[567]: Success Jan 17 12:20:56.732761 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:20:56.818636 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:20:56.826219 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:20:56.850109 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:20:56.900157 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:20:56.900256 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:56.900283 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:20:56.916642 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:20:56.916748 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:20:56.957540 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:20:56.962344 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:20:56.963310 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:20:56.968737 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:20:57.009687 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
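verity-setup above builds /dev/mapper/usr so that every read from the usr partition is checked against a sha256 Merkle tree (computed with the sha256-avx2 implementation per the kernel line) whose root must equal the verity.usrhash value on the kernel command line. The sketch below is illustrative only: real dm-verity salts each digest and lays hash blocks out per its superblock, both of which are omitted here.

    import hashlib

    BLOCK = 4096  # dm-verity's usual data/hash block size

    def verity_root(data: bytes) -> str:
        # Hash each 4 KiB data block, then hash groups of digests upward
        # until one root remains (salt and on-disk layout omitted).
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        per_block = BLOCK // 32        # sha256 digests that fit in one hash block
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + per_block])).digest()
                     for i in range(0, len(level), per_block)]
        return level[0].hex()
    # Boot proceeds only if the computed root matches verity.usrhash=...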
Jan 17 12:20:57.057738 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:57.057783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:57.057818 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:20:57.071581 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:20:57.071677 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:20:57.087096 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:20:57.106928 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:57.111173 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:20:57.140813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:20:57.214216 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:20:57.219791 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:20:57.333334 systemd-networkd[750]: lo: Link UP Jan 17 12:20:57.333351 systemd-networkd[750]: lo: Gained carrier Jan 17 12:20:57.336767 systemd-networkd[750]: Enumeration completed Jan 17 12:20:57.337451 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:20:57.337459 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:20:57.337674 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:20:57.340359 systemd-networkd[750]: eth0: Link UP Jan 17 12:20:57.340368 systemd-networkd[750]: eth0: Gained carrier Jan 17 12:20:57.340385 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:20:57.350431 ignition[685]: Ignition 2.19.0 Jan 17 12:20:57.350449 ignition[685]: Stage: fetch-offline Jan 17 12:20:57.350523 ignition[685]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:57.350541 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:57.350615 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.38/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 12:20:57.350723 ignition[685]: parsed url from cmdline: "" Jan 17 12:20:57.350730 ignition[685]: no config URL provided Jan 17 12:20:57.350740 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:20:57.350755 ignition[685]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:20:57.350766 ignition[685]: failed to fetch config: resource requires networking Jan 17 12:20:57.351051 ignition[685]: Ignition finished successfully Jan 17 12:20:57.353032 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:20:57.370446 systemd[1]: Reached target network.target - Network. Jan 17 12:20:57.390780 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:20:57.446810 ignition[758]: Ignition 2.19.0 Jan 17 12:20:57.446819 ignition[758]: Stage: fetch Jan 17 12:20:57.447024 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:57.447036 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:57.447155 ignition[758]: parsed url from cmdline: "" Jan 17 12:20:57.447162 ignition[758]: no config URL provided Jan 17 12:20:57.447170 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:20:57.447181 ignition[758]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:20:57.447204 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 17 12:20:57.451939 ignition[758]: GET result: OK Jan 17 12:20:57.452061 ignition[758]: parsing config with SHA512: 937dc624f0b598a8495a7b9a61e131a318320a59a660619289a0b585fa5fd1d80e29d56a417c25b28076f422fc6f4529e234da933bb255521a613458c1ef3480 Jan 17 12:20:57.458846 unknown[758]: fetched base config from "system" Jan 17 12:20:57.458874 unknown[758]: fetched base config from "system" Jan 17 12:20:57.458888 unknown[758]: fetched user config from "gcp" Jan 17 12:20:57.459798 ignition[758]: fetch: fetch complete Jan 17 12:20:57.459815 ignition[758]: fetch: fetch passed Jan 17 12:20:57.459889 ignition[758]: Ignition finished successfully Jan 17 12:20:57.461986 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:20:57.481817 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:20:57.527247 ignition[765]: Ignition 2.19.0 Jan 17 12:20:57.527258 ignition[765]: Stage: kargs Jan 17 12:20:57.527466 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:57.527481 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:57.528774 ignition[765]: kargs: kargs passed Jan 17 12:20:57.528846 ignition[765]: Ignition finished successfully Jan 17 12:20:57.530063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:20:57.568788 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:20:57.615095 ignition[771]: Ignition 2.19.0 Jan 17 12:20:57.615104 ignition[771]: Stage: disks Jan 17 12:20:57.615293 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:57.615305 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:57.616369 ignition[771]: disks: disks passed Jan 17 12:20:57.616435 ignition[771]: Ignition finished successfully Jan 17 12:20:57.617546 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:20:57.633452 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:20:57.649751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:20:57.666714 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:20:57.680721 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:20:57.696719 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:20:57.718953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:20:57.779748 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:20:57.949259 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:20:57.982705 systemd[1]: Mounting sysroot.mount - /sysroot...
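The fetch stage above succeeds where fetch-offline could not: with networking up, Ignition GETs the instance's user-data from the GCE metadata server and logs a SHA512 of the payload before parsing it. A minimal sketch of that request; the Metadata-Flavor header is GCE's standard guard on metadata requests, and whether Ignition sends exactly this and nothing more is an assumption here.

    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:    # "GET result: OK" in the log
        config = resp.read()
    # Corresponds to the "parsing config with SHA512: ..." line above.
    print(hashlib.sha512(config).hexdigest())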
Jan 17 12:20:58.098903 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:20:58.099813 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:20:58.100692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:20:58.131807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:20:58.149669 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:20:58.207723 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Jan 17 12:20:58.207776 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:58.207800 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:58.207822 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:20:58.159243 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:20:58.242663 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:20:58.242696 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:20:58.159343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:20:58.159386 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:20:58.201376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:20:58.232859 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:20:58.279868 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:20:58.380462 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:20:58.391688 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:20:58.402666 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:20:58.413628 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:20:58.546825 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:20:58.574691 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:20:58.602713 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:58.597729 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:20:58.611832 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:20:58.639490 ignition[899]: INFO : Ignition 2.19.0 Jan 17 12:20:58.639490 ignition[899]: INFO : Stage: mount Jan 17 12:20:58.653674 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:58.653674 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:58.653674 ignition[899]: INFO : mount: mount passed Jan 17 12:20:58.653674 ignition[899]: INFO : Ignition finished successfully Jan 17 12:20:58.642184 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:20:58.683119 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:20:58.707690 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:20:59.105801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 17 12:20:59.150578 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (911) Jan 17 12:20:59.169000 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:59.169097 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:59.169124 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:20:59.192664 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:20:59.192757 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:20:59.195894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:20:59.232858 systemd-networkd[750]: eth0: Gained IPv6LL Jan 17 12:20:59.240702 ignition[928]: INFO : Ignition 2.19.0 Jan 17 12:20:59.240702 ignition[928]: INFO : Stage: files Jan 17 12:20:59.255656 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:59.255656 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:20:59.255656 ignition[928]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:20:59.255656 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:20:59.255656 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:20:59.252079 unknown[928]: wrote ssh authorized keys file for user: core Jan 17 12:20:59.357730 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:20:59.357730 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:20:59.455723 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:20:59.618249 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:20:59.635702 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:20:59.983362 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:21:00.496656 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:21:00.496656 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:21:00.536691 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:00.536691 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:00.536691 ignition[928]: INFO : files: files passed Jan 17 12:21:00.536691 ignition[928]: INFO : Ignition finished successfully Jan 17 12:21:00.502738 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:21:00.531823 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:21:00.537885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:21:00.578295 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:21:00.743720 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:00.743720 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:00.578419 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
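The files stage's "setting preset to enabled for prepare-helm.service" op above amounts to a systemd preset entry written into the new root, so the unit comes up enabled on first boot. A hedged sketch of what that op does; the preset file name is an assumption, while the "enable <unit>" line is standard systemd.preset(5) syntax:

    from pathlib import Path

    # Hypothetical path: Ignition writes its presets somewhere under
    # /etc/systemd/system-preset in the target root.
    preset = Path("/sysroot/etc/systemd/system-preset/20-ignition.preset")
    preset.parent.mkdir(parents=True, exist_ok=True)
    preset.write_text("enable prepare-helm.service\n")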
Jan 17 12:21:00.792744 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:00.643255 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:00.647978 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:21:00.678825 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:21:00.760880 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:21:00.761018 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:21:00.783726 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:21:00.802862 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:21:00.826938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:21:00.833833 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:21:00.898884 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:00.924769 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:21:00.969608 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:21:00.969836 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:21:00.980617 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:01.008736 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:01.008893 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:21:01.037778 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:21:01.037906 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:01.068698 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:21:01.085710 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:21:01.100697 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:21:01.119704 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:01.137711 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:01.154708 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:21:01.172702 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:01.172889 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:21:01.203719 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:21:01.221701 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:21:01.236702 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:21:01.236831 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:01.264966 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:01.274965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:01.289962 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:21:01.290062 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:01.307930 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 17 12:21:01.308019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:01.346034 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:21:01.346157 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:01.356960 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:21:01.357036 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:21:01.381691 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:21:01.418470 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:21:01.442689 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:21:01.442836 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:01.453767 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:21:01.453864 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:01.466872 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:21:01.467037 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:21:01.477715 ignition[981]: INFO : Ignition 2.19.0 Jan 17 12:21:01.477715 ignition[981]: INFO : Stage: umount Jan 17 12:21:01.477715 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:01.477715 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:21:01.477715 ignition[981]: INFO : umount: umount passed Jan 17 12:21:01.477715 ignition[981]: INFO : Ignition finished successfully Jan 17 12:21:01.493430 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:21:01.494107 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:21:01.494228 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:21:01.508895 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:21:01.508967 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:21:01.528773 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:21:01.528882 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:21:01.549782 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:21:01.549877 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:21:01.566785 systemd[1]: Stopped target network.target - Network. Jan 17 12:21:01.581680 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:21:01.581830 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:01.601791 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:21:01.617673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:21:01.619623 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:01.638689 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:21:01.653701 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:21:01.670772 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:21:01.670861 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:21:01.690767 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:21:01.690864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:21:01.710757 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:21:01.710864 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:21:01.730794 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:21:01.730895 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:01.748785 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:21:01.748890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:01.767084 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:21:01.772614 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 17 12:21:01.784898 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:21:01.804376 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:21:01.804546 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:21:01.823602 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:21:01.823861 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:21:01.832495 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:21:01.832602 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:01.852681 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:21:01.880642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:21:02.367684 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:21:01.880865 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:01.888945 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:21:01.889018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:01.906981 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:21:01.907056 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:01.933872 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:21:01.933954 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:01.955075 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:01.976212 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:21:01.976401 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:02.002721 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:21:02.002789 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:02.017980 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:21:02.018064 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:02.046838 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:21:02.046923 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:02.073873 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:21:02.074082 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:21:02.100950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:02.101047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:21:02.135728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:21:02.165681 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:21:02.165921 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:02.186891 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:21:02.186978 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:02.207885 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:21:02.207966 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:02.228900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:02.228999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:02.236426 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:21:02.236604 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:21:02.254251 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:21:02.254373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:21:02.272188 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:21:02.305762 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:21:02.321900 systemd[1]: Switching root. Jan 17 12:21:02.727667 systemd-journald[183]: Journal stopped Jan 17 12:21:05.133584 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:21:05.133645 kernel: SELinux: policy capability open_perms=1 Jan 17 12:21:05.133668 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:21:05.133685 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:21:05.133702 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:21:05.133720 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:21:05.133739 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:21:05.133761 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:21:05.133780 kernel: audit: type=1403 audit(1737116462.968:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:21:05.133802 systemd[1]: Successfully loaded SELinux policy in 92.967ms. Jan 17 12:21:05.133824 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.508ms. Jan 17 12:21:05.133847 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:21:05.133868 systemd[1]: Detected virtualization google. Jan 17 12:21:05.133889 systemd[1]: Detected architecture x86-64. Jan 17 12:21:05.133917 systemd[1]: Detected first boot. Jan 17 12:21:05.133939 systemd[1]: Initializing machine ID from random generator. Jan 17 12:21:05.133960 zram_generator::config[1021]: No configuration found. Jan 17 12:21:05.133986 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:21:05.134016 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:21:05.134040 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 17 12:21:05.134060 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:21:05.134081 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:21:05.134102 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:21:05.134122 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:21:05.134143 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:21:05.134164 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:21:05.134189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:21:05.134210 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:21:05.134230 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:21:05.134250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:21:05.134271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:21:05.134292 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:21:05.134313 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:21:05.134334 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:21:05.134360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:21:05.134381 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:21:05.134401 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:21:05.134423 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:21:05.134446 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:21:05.134467 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:21:05.134495 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:21:05.134542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:05.134565 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:05.134592 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:21:05.134614 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:21:05.134635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:21:05.134658 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:21:05.134681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:21:05.134703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:21:05.134725 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:21:05.134753 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:21:05.134776 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:21:05.134798 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:21:05.134820 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 17 12:21:05.134844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:05.134871 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:21:05.134893 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:21:05.134916 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:21:05.134939 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:21:05.134964 systemd[1]: Reached target machines.target - Containers. Jan 17 12:21:05.134987 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:21:05.135018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:05.135042 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:05.135069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:21:05.135092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:05.135114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:05.135138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:05.135161 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:21:05.135184 kernel: fuse: init (API version 7.39) Jan 17 12:21:05.135205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:05.135229 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:21:05.135255 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:21:05.135275 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:21:05.135298 kernel: ACPI: bus type drm_connector registered Jan 17 12:21:05.135318 kernel: loop: module loaded Jan 17 12:21:05.135341 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:21:05.135364 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:21:05.135388 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:05.135411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:05.135482 systemd-journald[1108]: Collecting audit messages is disabled. Jan 17 12:21:05.135575 systemd-journald[1108]: Journal started Jan 17 12:21:05.135616 systemd-journald[1108]: Runtime Journal (/run/log/journal/2b5cb7d58953482a9240c2e5612bfd36) is 8.0M, max 148.7M, 140.7M free. Jan 17 12:21:03.920092 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:21:03.946762 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:21:03.947349 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:21:05.147565 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:21:05.172554 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:21:05.188573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 17 12:21:05.210394 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:21:05.210487 systemd[1]: Stopped verity-setup.service. Jan 17 12:21:05.239533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:05.248575 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:05.260081 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:21:05.270935 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:21:05.280930 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:21:05.290918 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:21:05.302005 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:21:05.311994 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:21:05.322156 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:21:05.334076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:05.346064 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:21:05.346304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:21:05.358073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:05.358301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:05.370126 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:05.370367 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:05.381057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:05.381283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:05.393103 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:21:05.393332 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:21:05.404083 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:05.404309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:05.414087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:05.424040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:21:05.436121 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:21:05.448098 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:05.473168 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:21:05.488660 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:21:05.508692 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:21:05.518690 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:21:05.518819 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:05.530642 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:21:05.549734 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 17 12:21:05.569764 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:21:05.579843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:05.586921 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:21:05.607824 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:21:05.616686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:05.625532 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:21:05.636131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:05.647181 systemd-journald[1108]: Time spent on flushing to /var/log/journal/2b5cb7d58953482a9240c2e5612bfd36 is 128.246ms for 929 entries. Jan 17 12:21:05.647181 systemd-journald[1108]: System Journal (/var/log/journal/2b5cb7d58953482a9240c2e5612bfd36) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:21:05.812145 systemd-journald[1108]: Received client request to flush runtime journal. Jan 17 12:21:05.812222 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:21:05.649718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:05.671812 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:21:05.692777 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:21:05.712497 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:21:05.727307 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:21:05.738890 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:21:05.760610 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:21:05.772109 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:21:05.789140 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:21:05.812616 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:21:05.824346 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:21:05.836206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:05.864563 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:21:05.862666 udevadm[1142]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:21:05.866472 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Jan 17 12:21:05.866525 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Jan 17 12:21:05.888423 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:05.900558 kernel: loop1: detected capacity change from 0 to 210664 Jan 17 12:21:05.918841 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:21:05.930793 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
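The systemd-journald accounting above works out to a small per-entry cost: flushing 929 entries to /var/log/journal took 128.246 ms, roughly 0.14 ms each.

    # Per-entry flush cost from the systemd-journald line above.
    ms, entries = 128.246, 929
    print(f"{ms / entries:.3f} ms per entry")   # 0.138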
Jan 17 12:21:05.936891 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:21:06.010771 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 12:21:06.034865 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:21:06.051818 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:06.117051 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Jan 17 12:21:06.117089 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Jan 17 12:21:06.134221 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:06.142606 kernel: loop3: detected capacity change from 0 to 54824 Jan 17 12:21:06.221700 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:21:06.266533 kernel: loop5: detected capacity change from 0 to 210664 Jan 17 12:21:06.308895 kernel: loop6: detected capacity change from 0 to 140768 Jan 17 12:21:06.374572 kernel: loop7: detected capacity change from 0 to 54824 Jan 17 12:21:06.402263 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 17 12:21:06.404480 (sd-merge)[1166]: Merged extensions into '/usr'. Jan 17 12:21:06.420139 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:21:06.420590 systemd[1]: Reloading... Jan 17 12:21:06.588538 zram_generator::config[1189]: No configuration found. Jan 17 12:21:06.853542 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:21:06.861218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:06.968193 systemd[1]: Reloading finished in 546 ms. Jan 17 12:21:07.003710 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:21:07.014357 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:21:07.035779 systemd[1]: Starting ensure-sysext.service... Jan 17 12:21:07.049291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:07.068643 systemd[1]: Reloading requested from client PID 1233 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:21:07.068679 systemd[1]: Reloading... Jan 17 12:21:07.125765 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:21:07.126469 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:21:07.128910 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:21:07.129635 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 17 12:21:07.129870 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 17 12:21:07.137896 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:07.137921 systemd-tmpfiles[1234]: Skipping /boot Jan 17 12:21:07.172411 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:21:07.172440 systemd-tmpfiles[1234]: Skipping /boot Jan 17 12:21:07.235584 zram_generator::config[1266]: No configuration found. 
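The (sd-merge) lines above show systemd-sysext stacking the four extension images over the base /usr, which is why systemd then reloads to pick up the merged unit files. Conceptually the merge is an overlayfs mount; the sketch below only prints the shape of that mount, and both the staging paths under /run/extensions and the lowerdir ordering are assumptions here.

    # Shape of the sysext merge logged above (illustrative; prints the
    # equivalent mount invocation rather than performing it).
    exts = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-gce"]
    lower = ":".join(f"/run/extensions/{e}/usr" for e in exts) + ":/usr"
    print(f"mount -t overlay overlay -o lowerdir={lower} /usr")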
Jan 17 12:21:07.352935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:07.418018 systemd[1]: Reloading finished in 348 ms. Jan 17 12:21:07.440532 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:21:07.457264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:07.482855 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:07.499773 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:21:07.518815 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:21:07.543798 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:07.565786 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:07.571728 augenrules[1322]: No rules Jan 17 12:21:07.584913 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:21:07.599330 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:07.611318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:21:07.625632 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Jan 17 12:21:07.645246 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:21:07.661735 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:21:07.681953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:07.682406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:07.692696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:07.710918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:07.730183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:07.740852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:07.741262 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:07.743315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:07.764100 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:21:07.777890 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:21:07.791554 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:21:07.805580 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:21:07.817462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:07.818824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:07.832194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:07.833594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 17 12:21:07.846562 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:07.846870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:07.914831 systemd[1]: Finished ensure-sysext.service. Jan 17 12:21:07.933318 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:21:07.937050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:07.938883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:21:07.952609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:07.968757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:07.984786 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:08.004781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:08.020778 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:21:08.028257 systemd-resolved[1317]: Positive Trust Anchors: Jan 17 12:21:08.029823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:08.034099 systemd-resolved[1317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:08.034181 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:08.040704 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:08.050709 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:21:08.058983 systemd-resolved[1317]: Defaulting to hostname 'linux'. Jan 17 12:21:08.060695 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:08.060754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:08.061880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:08.063357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:08.075393 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:08.086528 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:21:08.110536 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 17 12:21:08.112146 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:08.112805 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 17 12:21:08.121538 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 12:21:08.131175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:08.131434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:08.136772 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:21:08.156111 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1345) Jan 17 12:21:08.156243 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 17 12:21:08.166806 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:21:08.167307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:08.182603 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 12:21:08.216118 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:21:08.245205 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:08.252541 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:21:08.264784 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 17 12:21:08.279036 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:08.279165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:08.279526 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:21:08.310128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:08.344788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 12:21:08.363778 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:21:08.367153 systemd-networkd[1378]: lo: Link UP Jan 17 12:21:08.367643 systemd-networkd[1378]: lo: Gained carrier Jan 17 12:21:08.372589 systemd-networkd[1378]: Enumeration completed Jan 17 12:21:08.373556 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:08.373571 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:21:08.374278 systemd-networkd[1378]: eth0: Link UP Jan 17 12:21:08.374295 systemd-networkd[1378]: eth0: Gained carrier Jan 17 12:21:08.374318 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:08.374880 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:08.385297 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:21:08.386639 systemd-networkd[1378]: eth0: DHCPv4 address 10.128.0.38/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 12:21:08.397579 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 17 12:21:08.400195 systemd[1]: Reached target network.target - Network. Jan 17 12:21:08.406830 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:21:08.417856 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
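networkd flags above that eth0 only matched the catch-all zz-default.network via a potentially unpredictable interface name. A dedicated .network file keyed to a stable property avoids that; a sketch, with a placeholder MAC:

    cat > /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    # Placeholder address; substitute the NIC's real MAC from 'ip link'.
    MACAddress=42:01:0a:80:00:26
    [Network]
    DHCP=ipv4
    EOF
    networkctl reload    # re-evaluates .network files without a reboot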
Jan 17 12:21:08.418465 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:21:08.439269 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:08.478650 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:21:08.479136 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:08.484743 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:21:08.500190 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:08.509668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:08.521963 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:08.532854 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:21:08.544770 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:21:08.555949 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:21:08.565892 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:21:08.576700 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:21:08.587658 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:21:08.587725 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:08.596673 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:08.606276 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:21:08.618415 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:21:08.631045 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:21:08.641744 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:21:08.652924 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:21:08.663743 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:08.673726 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:08.682791 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:08.682854 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:08.694718 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:21:08.709776 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:21:08.731772 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:21:08.750654 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:21:08.775795 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:21:08.785668 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:21:08.795810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
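With paths.target, timers.target and sockets.target reached, the inventory the log shows piecemeal (logrotate.timer, mdadm.timer, docker.socket, sshd.socket, ...) can be listed in one place:

    systemctl list-timers --all                 # scheduled jobs and next elapse
    systemctl list-sockets                      # socket-activated services
    systemctl list-dependencies basic.target    # everything 'Basic System' pulled in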
Jan 17 12:21:08.811982 jq[1424]: false Jan 17 12:21:08.813740 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:21:08.829696 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:21:08.843192 extend-filesystems[1425]: Found loop4 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found loop5 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found loop6 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found loop7 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda1 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda2 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda3 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found usr Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda4 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda6 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda7 Jan 17 12:21:08.846169 extend-filesystems[1425]: Found sda9 Jan 17 12:21:08.846169 extend-filesystems[1425]: Checking size of /dev/sda9 Jan 17 12:21:09.038674 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 17 12:21:09.038721 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 17 12:21:09.038741 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1345) Jan 17 12:21:08.845771 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:21:09.038935 extend-filesystems[1425]: Resized partition /dev/sda9 Jan 17 12:21:08.912169 ntpd[1430]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.850 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.860 INFO Fetch successful Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.860 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.865 INFO Fetch successful Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.865 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.868 INFO Fetch successful Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.868 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 17 12:21:09.048142 coreos-metadata[1422]: Jan 17 12:21:08.870 INFO Fetch successful
Jan 17 12:21:08.871787 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:21:09.057855 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:21:09.057855 extend-filesystems[1445]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 12:21:09.057855 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 12:21:09.057855 extend-filesystems[1445]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 17 12:21:08.912203 ntpd[1430]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:21:08.917787 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:21:09.129288 extend-filesystems[1425]: Resized filesystem in /dev/sda9 Jan 17 12:21:08.912250 ntpd[1430]: ---------------------------------------------------- Jan 17 12:21:08.937300 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 17 12:21:08.912264 ntpd[1430]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:21:08.938217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:21:08.912279 ntpd[1430]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:21:08.945762 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:21:09.144224 jq[1451]: true Jan 17 12:21:08.912296 ntpd[1430]: corporation. Support and training for ntp-4 are Jan 17 12:21:08.959697 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
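extend-filesystems grew the mounted root from 1617920 to 2538491 blocks of 4 KiB; ext4 supports this online, which is why no remount appears in the log. The manual equivalent, assuming the underlying partition has already been enlarged:

    resize2fs /dev/sda9    # grow the mounted ext4 filesystem to fill sda9
    df -B4K /              # should report roughly 2538491 blocks, as logged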
Jan 17 12:21:08.912310 ntpd[1430]: available at https://www.nwtime.org/support Jan 17 12:21:08.972699 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:21:08.912322 ntpd[1430]: ---------------------------------------------------- Jan 17 12:21:09.021120 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:21:08.919113 ntpd[1430]: proto: precision = 0.076 usec (-24) Jan 17 12:21:09.021382 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:21:08.921132 ntpd[1430]: basedate set to 2025-01-05 Jan 17 12:21:09.021891 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:21:08.921158 ntpd[1430]: gps base set to 2025-01-05 (week 2348) Jan 17 12:21:09.022120 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:21:08.948122 dbus-daemon[1423]: [system] SELinux support is enabled Jan 17 12:21:09.060410 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:21:09.202920 update_engine[1449]: I20250117 12:21:09.185256 1449 main.cc:92] Flatcar Update Engine starting Jan 17 12:21:08.948151 ntpd[1430]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:21:09.060710 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:21:08.948211 ntpd[1430]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:21:09.077826 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:21:08.948469 ntpd[1430]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:21:09.207330 update_engine[1449]: I20250117 12:21:09.204860 1449 update_check_scheduler.cc:74] Next update check in 3m24s Jan 17 12:21:09.208610 jq[1460]: true Jan 17 12:21:09.079642 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:21:08.948571 ntpd[1430]: Listen normally on 3 eth0 10.128.0.38:123 Jan 17 12:21:09.201134 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:21:08.948635 ntpd[1430]: Listen normally on 4 lo [::1]:123 Jan 17 12:21:09.204095 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:21:08.948723 ntpd[1430]: bind(21) AF_INET6 fe80::4001:aff:fe80:26%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 12:21:08.948755 ntpd[1430]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:26%2#123 Jan 17 12:21:08.948777 ntpd[1430]: failed to init interface for address fe80::4001:aff:fe80:26%2 Jan 17 12:21:08.959148 ntpd[1430]: Listening on routing socket on fd #21 for interface updates Jan 17 12:21:08.960831 dbus-daemon[1423]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1378 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:21:08.963935 ntpd[1430]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:21:08.963983 ntpd[1430]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:21:09.142907 dbus-daemon[1423]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:21:09.229541 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:21:09.265229 systemd[1]: Started update-engine.service - Update Engine. 
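The coreos-metadata fetches earlier in the boot all target the GCE metadata server, which answers any client that presents the mandatory Metadata-Flavor header. The same queries by hand:

    MD=http://169.254.169.254/computeMetadata/v1
    curl -sf -H 'Metadata-Flavor: Google' "$MD/instance/hostname"; echo
    curl -sf -H 'Metadata-Flavor: Google' "$MD/instance/network-interfaces/0/ip"; echo
    curl -sf -H 'Metadata-Flavor: Google' "$MD/instance/machine-type"; echo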
Jan 17 12:21:09.277263 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:21:09.277407 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:21:09.277451 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:21:09.297782 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:21:09.306376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:21:09.306440 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:21:09.316628 tar[1459]: linux-amd64/helm Jan 17 12:21:09.331031 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:21:09.358433 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:21:09.374775 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:21:09.418777 systemd-logind[1448]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:21:09.418818 systemd-logind[1448]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 17 12:21:09.418849 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:21:09.436765 systemd-logind[1448]: New seat seat0. Jan 17 12:21:09.440734 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:21:09.441781 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:09.451190 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:21:09.468797 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:21:09.469085 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:21:09.505940 systemd[1]: Starting sshkeys.service... Jan 17 12:21:09.524816 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:21:09.553571 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:21:09.575828 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:21:09.613630 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:21:09.637861 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:21:09.653835 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:21:09.664113 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:21:09.670238 dbus-daemon[1423]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:21:09.673096 dbus-daemon[1423]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1484 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:21:09.674679 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:21:09.693970 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:21:09.699162 systemd[1]: Starting polkit.service - Authorization Manager... 
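sshd_keygen produced all three default host key types in one pass above. Outside the unit, the standard one-shot equivalent is:

    ssh-keygen -A                        # generate any missing default host keys
    ls -l /etc/ssh/ssh_host_*_key.pub    # RSA, ECDSA and ED25519 should exist now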
Jan 17 12:21:09.722684 coreos-metadata[1513]: Jan 17 12:21:09.722 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 17 12:21:09.723086 coreos-metadata[1513]: Jan 17 12:21:09.722 INFO Fetch failed with 404: resource not found Jan 17 12:21:09.723086 coreos-metadata[1513]: Jan 17 12:21:09.722 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 17 12:21:09.725528 coreos-metadata[1513]: Jan 17 12:21:09.724 INFO Fetch successful Jan 17 12:21:09.725528 coreos-metadata[1513]: Jan 17 12:21:09.724 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 17 12:21:09.726139 coreos-metadata[1513]: Jan 17 12:21:09.725 INFO Fetch failed with 404: resource not found Jan 17 12:21:09.726139 coreos-metadata[1513]: Jan 17 12:21:09.726 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 17 12:21:09.727254 coreos-metadata[1513]: Jan 17 12:21:09.726 INFO Fetch failed with 404: resource not found Jan 17 12:21:09.727522 coreos-metadata[1513]: Jan 17 12:21:09.727 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 17 12:21:09.730576 coreos-metadata[1513]: Jan 17 12:21:09.730 INFO Fetch successful Jan 17 12:21:09.736091 unknown[1513]: wrote ssh authorized keys file for user: core Jan 17 12:21:09.774601 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:09.774320 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:21:09.792082 systemd[1]: Finished sshkeys.service. Jan 17 12:21:09.792676 systemd-networkd[1378]: eth0: Gained IPv6LL Jan 17 12:21:09.801533 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:21:09.803733 polkitd[1524]: Started polkitd version 121 Jan 17 12:21:09.816500 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:21:09.823043 polkitd[1524]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:21:09.823159 polkitd[1524]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:21:09.829760 polkitd[1524]: Finished loading, compiling and executing 2 rules Jan 17 12:21:09.839739 dbus-daemon[1423]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:21:09.841866 polkitd[1524]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:21:09.841974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:09.861946 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:21:09.883702 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 17 12:21:09.902189 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:21:09.940956 systemd-hostnamed[1484]: Hostname set to (transient) Jan 17 12:21:09.944845 systemd-resolved[1317]: System hostname changed to 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal'. 
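The 404/success pattern above encodes the key agent's lookup order: instance attributes first (legacy sshKeys, then ssh-keys), then project attributes unless block-project-ssh-keys is set. A condensed approximation in shell:

    MD=http://169.254.169.254/computeMetadata/v1
    H='Metadata-Flavor: Google'
    # Instance-level keys, legacy attribute name first, as in the log.
    curl -sf -H "$H" "$MD/instance/attributes/sshKeys" ||
      curl -sf -H "$H" "$MD/instance/attributes/ssh-keys"
    # Project-level keys apply only when the instance does not block them.
    block=$(curl -sf -H "$H" "$MD/instance/attributes/block-project-ssh-keys" || echo false)
    [ "$block" = "true" ] || curl -sf -H "$H" "$MD/project/attributes/ssh-keys"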
Jan 17 12:21:09.952103 init.sh[1539]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 17 12:21:09.958537 init.sh[1539]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 17 12:21:09.958537 init.sh[1539]: + /usr/bin/google_instance_setup Jan 17 12:21:09.983255 containerd[1468]: time="2025-01-17T12:21:09.982784411Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:21:09.992299 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:21:10.066278 containerd[1468]: time="2025-01-17T12:21:10.064859797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:10.069568 containerd[1468]: time="2025-01-17T12:21:10.069492479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:10.070159 containerd[1468]: time="2025-01-17T12:21:10.070122831Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:21:10.070296 containerd[1468]: time="2025-01-17T12:21:10.070273630Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:21:10.070970 containerd[1468]: time="2025-01-17T12:21:10.070737570Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:21:10.071463 containerd[1468]: time="2025-01-17T12:21:10.071430799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:10.072131 containerd[1468]: time="2025-01-17T12:21:10.071742894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:10.072131 containerd[1468]: time="2025-01-17T12:21:10.071776722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:10.072413 containerd[1468]: time="2025-01-17T12:21:10.072367672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:10.072413 containerd[1468]: time="2025-01-17T12:21:10.072409426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:10.072745 containerd[1468]: time="2025-01-17T12:21:10.072434710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:10.072745 containerd[1468]: time="2025-01-17T12:21:10.072454591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:10.072745 containerd[1468]: time="2025-01-17T12:21:10.072611302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:10.076176 containerd[1468]: time="2025-01-17T12:21:10.075961646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:21:10.076256 containerd[1468]: time="2025-01-17T12:21:10.076178285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:10.076256 containerd[1468]: time="2025-01-17T12:21:10.076203753Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:21:10.076365 containerd[1468]: time="2025-01-17T12:21:10.076344020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:21:10.077024 containerd[1468]: time="2025-01-17T12:21:10.076421808Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:21:10.084693 containerd[1468]: time="2025-01-17T12:21:10.084634861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:21:10.084829 containerd[1468]: time="2025-01-17T12:21:10.084748920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:21:10.084884 containerd[1468]: time="2025-01-17T12:21:10.084822992Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:21:10.084884 containerd[1468]: time="2025-01-17T12:21:10.084853884Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:21:10.084992 containerd[1468]: time="2025-01-17T12:21:10.084899982Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:21:10.085530 containerd[1468]: time="2025-01-17T12:21:10.085146199Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:21:10.085625 containerd[1468]: time="2025-01-17T12:21:10.085550478Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:21:10.085790 containerd[1468]: time="2025-01-17T12:21:10.085758182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:21:10.085855 containerd[1468]: time="2025-01-17T12:21:10.085795606Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:21:10.085855 containerd[1468]: time="2025-01-17T12:21:10.085818824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:21:10.085855 containerd[1468]: time="2025-01-17T12:21:10.085843295Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:21:10.085995 containerd[1468]: time="2025-01-17T12:21:10.085866641Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:21:10.085995 containerd[1468]: time="2025-01-17T12:21:10.085889546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:21:10.085995 containerd[1468]: time="2025-01-17T12:21:10.085914351Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 17 12:21:10.085995 containerd[1468]: time="2025-01-17T12:21:10.085940748Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:21:10.085995 containerd[1468]: time="2025-01-17T12:21:10.085963657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:21:10.085995 containerd[1468]: time="2025-01-17T12:21:10.085986788Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086006818Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086039365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086062251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086102560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086127855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086148956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086180177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086201948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086239 containerd[1468]: time="2025-01-17T12:21:10.086225213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086247293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086273586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086294011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086316116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086338897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086365730Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086398337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086429114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.086656 containerd[1468]: time="2025-01-17T12:21:10.086454301Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:21:10.091304 containerd[1468]: time="2025-01-17T12:21:10.089866810Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:21:10.091304 containerd[1468]: time="2025-01-17T12:21:10.090024141Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:21:10.091304 containerd[1468]: time="2025-01-17T12:21:10.090068017Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:21:10.091304 containerd[1468]: time="2025-01-17T12:21:10.090092017Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:21:10.091304 containerd[1468]: time="2025-01-17T12:21:10.090110465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:21:10.091304 containerd[1468]: time="2025-01-17T12:21:10.090418293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:21:10.091304 containerd[1468]: time="2025-01-17T12:21:10.090445441Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:21:10.092087 containerd[1468]: time="2025-01-17T12:21:10.091388237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:10.099391 containerd[1468]: time="2025-01-17T12:21:10.098576394Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:21:10.100779 containerd[1468]: time="2025-01-17T12:21:10.100489839Z" level=info msg="Connect containerd service" Jan 17 12:21:10.101733 containerd[1468]: time="2025-01-17T12:21:10.100857157Z" level=info msg="using legacy CRI server" Jan 17 12:21:10.101733 containerd[1468]: time="2025-01-17T12:21:10.100897756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:21:10.101733 containerd[1468]: time="2025-01-17T12:21:10.101107366Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:21:10.102157 containerd[1468]: time="2025-01-17T12:21:10.102118897Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:21:10.103420 
containerd[1468]: time="2025-01-17T12:21:10.102782015Z" level=info msg="Start subscribing containerd event" Jan 17 12:21:10.103420 containerd[1468]: time="2025-01-17T12:21:10.103250714Z" level=info msg="Start recovering state" Jan 17 12:21:10.103420 containerd[1468]: time="2025-01-17T12:21:10.103376606Z" level=info msg="Start event monitor" Jan 17 12:21:10.104413 containerd[1468]: time="2025-01-17T12:21:10.103715941Z" level=info msg="Start snapshots syncer" Jan 17 12:21:10.104413 containerd[1468]: time="2025-01-17T12:21:10.103746479Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:21:10.104413 containerd[1468]: time="2025-01-17T12:21:10.103772529Z" level=info msg="Start streaming server" Jan 17 12:21:10.104891 containerd[1468]: time="2025-01-17T12:21:10.104849602Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:21:10.104964 containerd[1468]: time="2025-01-17T12:21:10.104947371Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:21:10.105643 containerd[1468]: time="2025-01-17T12:21:10.105034643Z" level=info msg="containerd successfully booted in 0.123546s" Jan 17 12:21:10.105157 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:21:10.423278 tar[1459]: linux-amd64/LICENSE Jan 17 12:21:10.423278 tar[1459]: linux-amd64/README.md Jan 17 12:21:10.448656 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:21:10.656368 instance-setup[1545]: INFO Running google_set_multiqueue. Jan 17 12:21:10.676158 instance-setup[1545]: INFO Set channels for eth0 to 2. Jan 17 12:21:10.682138 instance-setup[1545]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 17 12:21:10.684364 instance-setup[1545]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 17 12:21:10.684701 instance-setup[1545]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 17 12:21:10.686122 instance-setup[1545]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 17 12:21:10.686466 instance-setup[1545]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 17 12:21:10.688033 instance-setup[1545]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 17 12:21:10.688369 instance-setup[1545]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 17 12:21:10.690648 instance-setup[1545]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 17 12:21:10.699016 instance-setup[1545]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 12:21:10.703461 instance-setup[1545]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 12:21:10.705598 instance-setup[1545]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 17 12:21:10.705825 instance-setup[1545]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 17 12:21:10.729372 init.sh[1539]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 17 12:21:10.890417 startup-script[1585]: INFO Starting startup scripts. Jan 17 12:21:10.897120 startup-script[1585]: INFO No startup scripts found in metadata. Jan 17 12:21:10.897196 startup-script[1585]: INFO Finished running startup scripts. 
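The one error in containerd's startup is the CNI loader finding nothing in /etc/cni/net.d, which is normal before a network provider is installed. For reference, roughly the smallest config that would satisfy it; this assumes the standard bridge and host-local plugin binaries exist in /opt/cni/bin (the NetworkPluginBinDir from the config dump), and the subnet is a placeholder:

    mkdir -p /etc/cni/net.d
    cat > /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.88.0.0/16",
          "routes": [{ "dst": "0.0.0.0/0" }]
        }
      }]
    }
    EOF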
Jan 17 12:21:10.919897 init.sh[1539]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 17 12:21:10.919897 init.sh[1539]: + daemon_pids=() Jan 17 12:21:10.920143 init.sh[1539]: + for d in accounts clock_skew network Jan 17 12:21:10.920588 init.sh[1539]: + daemon_pids+=($!) Jan 17 12:21:10.920705 init.sh[1588]: + /usr/bin/google_accounts_daemon Jan 17 12:21:10.921101 init.sh[1539]: + for d in accounts clock_skew network Jan 17 12:21:10.921279 init.sh[1539]: + daemon_pids+=($!) Jan 17 12:21:10.921335 init.sh[1539]: + for d in accounts clock_skew network Jan 17 12:21:10.921644 init.sh[1539]: + daemon_pids+=($!) Jan 17 12:21:10.921715 init.sh[1539]: + NOTIFY_SOCKET=/run/systemd/notify Jan 17 12:21:10.921715 init.sh[1539]: + /usr/bin/systemd-notify --ready Jan 17 12:21:10.921846 init.sh[1589]: + /usr/bin/google_clock_skew_daemon Jan 17 12:21:10.922829 init.sh[1590]: + /usr/bin/google_network_daemon Jan 17 12:21:10.940700 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 17 12:21:10.954686 init.sh[1539]: + wait -n 1588 1589 1590 Jan 17 12:21:11.263371 google-clock-skew[1589]: INFO Starting Google Clock Skew daemon. Jan 17 12:21:11.275135 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:21:11.287079 google-clock-skew[1589]: INFO Clock drift token has changed: 0. Jan 17 12:21:11.293317 systemd[1]: Started sshd@0-10.128.0.38:22-139.178.89.65:50990.service - OpenSSH per-connection server daemon (139.178.89.65:50990). Jan 17 12:21:11.312136 google-networking[1590]: INFO Starting Google Networking daemon. Jan 17 12:21:11.395412 groupadd[1603]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 17 12:21:11.400543 groupadd[1603]: group added to /etc/gshadow: name=google-sudoers Jan 17 12:21:11.454775 groupadd[1603]: new group: name=google-sudoers, GID=1000 Jan 17 12:21:11.485252 google-accounts[1588]: INFO Starting Google Accounts daemon. Jan 17 12:21:11.498206 google-accounts[1588]: WARNING OS Login not installed. Jan 17 12:21:11.500306 google-accounts[1588]: INFO Creating a new user account for 0. Jan 17 12:21:11.505087 init.sh[1611]: useradd: invalid user name '0': use --badname to ignore Jan 17 12:21:11.505837 google-accounts[1588]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 17 12:21:11.586459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:11.598454 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:21:11.602044 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:11.608855 systemd[1]: Startup finished in 1.052s (kernel) + 9.185s (initrd) + 8.722s (userspace) = 18.960s. Jan 17 12:21:11.643856 sshd[1600]: Accepted publickey for core from 139.178.89.65 port 50990 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:11.646174 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:11.663114 systemd-logind[1448]: New session 1 of user core. Jan 17 12:21:11.665707 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:21:11.675002 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:21:11.694665 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
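The init.sh lines above are bash's set -x trace of a small supervisor: trap SIGTERM, fork one daemon per role, signal readiness to systemd, then block until any child exits. De-traced for readability, with every name taken from the trace itself:

    trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
    daemon_pids=()
    for d in accounts clock_skew network; do
      "/usr/bin/google_${d}_daemon" &   # accounts, clock_skew and network daemons, per the trace
      daemon_pids+=($!)
    done
    NOTIFY_SOCKET=/run/systemd/notify /usr/bin/systemd-notify --ready
    wait -n "${daemon_pids[@]}"         # returns as soon as one daemon exits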
Jan 17 12:21:11.705032 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:21:11.726565 (systemd)[1625]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:21:12.002024 google-clock-skew[1589]: INFO Synced system time with hardware clock. Jan 17 12:21:12.002333 systemd-resolved[1317]: Clock change detected. Flushing caches. Jan 17 12:21:12.093307 systemd[1625]: Queued start job for default target default.target. Jan 17 12:21:12.099149 systemd[1625]: Created slice app.slice - User Application Slice. Jan 17 12:21:12.099397 systemd[1625]: Reached target paths.target - Paths. Jan 17 12:21:12.099517 systemd[1625]: Reached target timers.target - Timers. Jan 17 12:21:12.102619 ntpd[1430]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:26%2]:123 Jan 17 12:21:12.103000 systemd[1625]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:21:12.120690 systemd[1625]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:21:12.120889 systemd[1625]: Reached target sockets.target - Sockets. Jan 17 12:21:12.120928 systemd[1625]: Reached target basic.target - Basic System. Jan 17 12:21:12.120999 systemd[1625]: Reached target default.target - Main User Target. Jan 17 12:21:12.121056 systemd[1625]: Startup finished in 194ms. Jan 17 12:21:12.121737 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:21:12.128498 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:21:12.366384 systemd[1]: Started sshd@1-10.128.0.38:22-139.178.89.65:50994.service - OpenSSH per-connection server daemon (139.178.89.65:50994). Jan 17 12:21:12.670455 sshd[1640]: Accepted publickey for core from 139.178.89.65 port 50994 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:12.672766 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:12.680480 systemd-logind[1448]: New session 2 of user core. Jan 17 12:21:12.686504 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:21:12.874085 kubelet[1618]: E0117 12:21:12.873942 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:12.877088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:12.877405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:21:12.877905 systemd[1]: kubelet.service: Consumed 1.343s CPU time. Jan 17 12:21:12.892603 sshd[1640]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:12.896300 systemd[1]: sshd@1-10.128.0.38:22-139.178.89.65:50994.service: Deactivated successfully. Jan 17 12:21:12.898584 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:21:12.900513 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:21:12.901912 systemd-logind[1448]: Removed session 2. Jan 17 12:21:12.948667 systemd[1]: Started sshd@2-10.128.0.38:22-139.178.89.65:51004.service - OpenSSH per-connection server daemon (139.178.89.65:51004).
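The kubelet failure is expected at this point: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, and neither has run yet. Purely for illustration, a hand-written stand-in would take this shape (the single setting shown mirrors the SystemdCgroup=true runc option in containerd's config dump; everything else is defaulted):

    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF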
Jan 17 12:21:13.233580 sshd[1650]: Accepted publickey for core from 139.178.89.65 port 51004 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:13.235361 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:13.241615 systemd-logind[1448]: New session 3 of user core. Jan 17 12:21:13.251528 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:21:13.445889 sshd[1650]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:13.450368 systemd[1]: sshd@2-10.128.0.38:22-139.178.89.65:51004.service: Deactivated successfully. Jan 17 12:21:13.452612 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:21:13.454398 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:21:13.455971 systemd-logind[1448]: Removed session 3. Jan 17 12:21:13.501651 systemd[1]: Started sshd@3-10.128.0.38:22-139.178.89.65:51016.service - OpenSSH per-connection server daemon (139.178.89.65:51016). Jan 17 12:21:13.795781 sshd[1657]: Accepted publickey for core from 139.178.89.65 port 51016 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:13.797666 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:13.803877 systemd-logind[1448]: New session 4 of user core. Jan 17 12:21:13.814498 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:21:14.012547 sshd[1657]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:14.016876 systemd[1]: sshd@3-10.128.0.38:22-139.178.89.65:51016.service: Deactivated successfully. Jan 17 12:21:14.019321 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:21:14.021160 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:21:14.022594 systemd-logind[1448]: Removed session 4. Jan 17 12:21:14.068683 systemd[1]: Started sshd@4-10.128.0.38:22-139.178.89.65:51028.service - OpenSSH per-connection server daemon (139.178.89.65:51028). Jan 17 12:21:14.349996 sshd[1664]: Accepted publickey for core from 139.178.89.65 port 51028 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:14.351772 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:14.357319 systemd-logind[1448]: New session 5 of user core. Jan 17 12:21:14.369543 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:21:14.541618 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:21:14.542122 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:14.557129 sudo[1667]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:14.600393 sshd[1664]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:14.605479 systemd[1]: sshd@4-10.128.0.38:22-139.178.89.65:51028.service: Deactivated successfully. Jan 17 12:21:14.607982 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:21:14.610061 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:21:14.611910 systemd-logind[1448]: Removed session 5. Jan 17 12:21:14.655635 systemd[1]: Started sshd@5-10.128.0.38:22-139.178.89.65:51030.service - OpenSSH per-connection server daemon (139.178.89.65:51030). 
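The sudo entry above flips SELinux to enforcing for the running kernel only; it does not persist across reboots. A quick check pair:

    getenforce           # reports Enforcing after the setenforce 1 above
    sudo setenforce 1    # runtime-only toggle; the boot-time mode comes from the OS's SELinux config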
Jan 17 12:21:14.939987 sshd[1672]: Accepted publickey for core from 139.178.89.65 port 51030 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:14.941893 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:14.948474 systemd-logind[1448]: New session 6 of user core. Jan 17 12:21:14.958524 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:21:15.120934 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:21:15.121459 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:15.126576 sudo[1676]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:15.140482 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:21:15.140985 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:15.163806 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:15.166531 auditctl[1679]: No rules Jan 17 12:21:15.167052 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:21:15.167377 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:15.173806 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:15.212937 augenrules[1697]: No rules Jan 17 12:21:15.214917 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:15.216990 sudo[1675]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:15.260403 sshd[1672]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:15.264804 systemd[1]: sshd@5-10.128.0.38:22-139.178.89.65:51030.service: Deactivated successfully. Jan 17 12:21:15.267237 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:21:15.269340 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:21:15.270823 systemd-logind[1448]: Removed session 6. Jan 17 12:21:15.316975 systemd[1]: Started sshd@6-10.128.0.38:22-139.178.89.65:51044.service - OpenSSH per-connection server daemon (139.178.89.65:51044). Jan 17 12:21:15.601991 sshd[1705]: Accepted publickey for core from 139.178.89.65 port 51044 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:15.603925 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:15.610197 systemd-logind[1448]: New session 7 of user core. Jan 17 12:21:15.616485 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:21:15.782109 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:21:15.782702 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:16.231681 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:21:16.244058 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:21:16.696264 dockerd[1724]: time="2025-01-17T12:21:16.696155417Z" level=info msg="Starting up" Jan 17 12:21:16.815706 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport773402334-merged.mount: Deactivated successfully. 
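The audit sequence above (stop audit-rules, delete two rules.d files, restart) rebuilds the kernel ruleset from whatever remains on disk, which here is nothing, hence augenrules reporting "No rules". The same tools can be driven directly:

    auditctl -D          # flush the loaded ruleset
    augenrules --load    # recompile /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list what the kernel now enforces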
Jan 17 12:21:16.903655 dockerd[1724]: time="2025-01-17T12:21:16.903592822Z" level=info msg="Loading containers: start." Jan 17 12:21:17.052321 kernel: Initializing XFRM netlink socket Jan 17 12:21:17.156791 systemd-networkd[1378]: docker0: Link UP Jan 17 12:21:17.180339 dockerd[1724]: time="2025-01-17T12:21:17.180278338Z" level=info msg="Loading containers: done." Jan 17 12:21:17.202649 dockerd[1724]: time="2025-01-17T12:21:17.202581415Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:21:17.202886 dockerd[1724]: time="2025-01-17T12:21:17.202735995Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:21:17.202953 dockerd[1724]: time="2025-01-17T12:21:17.202892276Z" level=info msg="Daemon has completed initialization" Jan 17 12:21:17.245429 dockerd[1724]: time="2025-01-17T12:21:17.245230874Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:21:17.245764 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:21:18.285793 containerd[1468]: time="2025-01-17T12:21:18.285747052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:21:18.818553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407433770.mount: Deactivated successfully. Jan 17 12:21:20.593741 containerd[1468]: time="2025-01-17T12:21:20.593664785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:20.595344 containerd[1468]: time="2025-01-17T12:21:20.595264639Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32683640" Jan 17 12:21:20.596596 containerd[1468]: time="2025-01-17T12:21:20.596531018Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:20.600171 containerd[1468]: time="2025-01-17T12:21:20.600098657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:20.601780 containerd[1468]: time="2025-01-17T12:21:20.601542534Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.315741249s" Jan 17 12:21:20.601780 containerd[1468]: time="2025-01-17T12:21:20.601592802Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 17 12:21:20.631092 containerd[1468]: time="2025-01-17T12:21:20.630793996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:21:22.343833 containerd[1468]: time="2025-01-17T12:21:22.343743338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:22.345682 
containerd[1468]: time="2025-01-17T12:21:22.345610627Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29607679" Jan 17 12:21:22.346992 containerd[1468]: time="2025-01-17T12:21:22.346913584Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:22.350942 containerd[1468]: time="2025-01-17T12:21:22.350862303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:22.352474 containerd[1468]: time="2025-01-17T12:21:22.352222750Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.721372804s" Jan 17 12:21:22.352474 containerd[1468]: time="2025-01-17T12:21:22.352295432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 17 12:21:22.383553 containerd[1468]: time="2025-01-17T12:21:22.383489617Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:21:23.111935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:21:23.123660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:23.405592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:23.409895 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:23.505061 kubelet[1948]: E0117 12:21:23.503821 1948 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:23.512856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:23.513233 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:21:23.835178 containerd[1468]: time="2025-01-17T12:21:23.835007942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:23.837009 containerd[1468]: time="2025-01-17T12:21:23.836933777Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17784980" Jan 17 12:21:23.838624 containerd[1468]: time="2025-01-17T12:21:23.838556338Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:23.842758 containerd[1468]: time="2025-01-17T12:21:23.842664713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:23.844533 containerd[1468]: time="2025-01-17T12:21:23.844057735Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.460511636s" Jan 17 12:21:23.844533 containerd[1468]: time="2025-01-17T12:21:23.844104418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 17 12:21:23.875514 containerd[1468]: time="2025-01-17T12:21:23.875466347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:21:25.011598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790025297.mount: Deactivated successfully. 
Jan 17 12:21:25.578440 containerd[1468]: time="2025-01-17T12:21:25.578360666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:25.579818 containerd[1468]: time="2025-01-17T12:21:25.579740290Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29060232" Jan 17 12:21:25.581441 containerd[1468]: time="2025-01-17T12:21:25.581348340Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:25.584407 containerd[1468]: time="2025-01-17T12:21:25.584333316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:25.585327 containerd[1468]: time="2025-01-17T12:21:25.585279086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.70955233s" Jan 17 12:21:25.585454 containerd[1468]: time="2025-01-17T12:21:25.585334023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 17 12:21:25.616027 containerd[1468]: time="2025-01-17T12:21:25.615959307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:21:26.020170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221071712.mount: Deactivated successfully. 
Jan 17 12:21:27.109392 containerd[1468]: time="2025-01-17T12:21:27.109317380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:27.111108 containerd[1468]: time="2025-01-17T12:21:27.111044833Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 17 12:21:27.112603 containerd[1468]: time="2025-01-17T12:21:27.112514394Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:27.116663 containerd[1468]: time="2025-01-17T12:21:27.116576552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:27.118402 containerd[1468]: time="2025-01-17T12:21:27.118184063Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.502169264s" Jan 17 12:21:27.118402 containerd[1468]: time="2025-01-17T12:21:27.118237141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:21:27.149695 containerd[1468]: time="2025-01-17T12:21:27.149648924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:21:27.537946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722067955.mount: Deactivated successfully. 
Jan 17 12:21:27.544329 containerd[1468]: time="2025-01-17T12:21:27.544233630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:27.545587 containerd[1468]: time="2025-01-17T12:21:27.545273496Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jan 17 12:21:27.546883 containerd[1468]: time="2025-01-17T12:21:27.546800258Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:27.553269 containerd[1468]: time="2025-01-17T12:21:27.551452159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:27.553269 containerd[1468]: time="2025-01-17T12:21:27.552919697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 403.169057ms" Jan 17 12:21:27.553269 containerd[1468]: time="2025-01-17T12:21:27.552962844Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:21:27.583004 containerd[1468]: time="2025-01-17T12:21:27.582952143Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:21:28.012149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288868880.mount: Deactivated successfully. Jan 17 12:21:30.257137 containerd[1468]: time="2025-01-17T12:21:30.257064068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:30.258897 containerd[1468]: time="2025-01-17T12:21:30.258810060Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Jan 17 12:21:30.260383 containerd[1468]: time="2025-01-17T12:21:30.260309283Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:30.264546 containerd[1468]: time="2025-01-17T12:21:30.264480344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:30.266942 containerd[1468]: time="2025-01-17T12:21:30.266455156Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.683441748s" Jan 17 12:21:30.266942 containerd[1468]: time="2025-01-17T12:21:30.266517802Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 17 12:21:33.611330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 17 12:21:33.620387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:33.909529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:33.919968 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:33.998082 kubelet[2141]: E0117 12:21:33.998009 2141 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:34.002209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:34.002506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:21:34.038115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:34.047998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:34.084980 systemd[1]: Reloading requested from client PID 2155 ('systemctl') (unit session-7.scope)... Jan 17 12:21:34.085761 systemd[1]: Reloading... Jan 17 12:21:34.259290 zram_generator::config[2196]: No configuration found. Jan 17 12:21:34.401295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:34.503759 systemd[1]: Reloading finished in 417 ms. Jan 17 12:21:34.561379 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:21:34.561532 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:21:34.561861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:34.567797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:34.794582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:34.806930 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:21:34.862884 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:21:34.862884 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:21:34.863486 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:21:34.863486 kubelet[2246]: I0117 12:21:34.863066 2246 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:21:35.735863 kubelet[2246]: I0117 12:21:35.735798 2246 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:21:35.735863 kubelet[2246]: I0117 12:21:35.735837 2246 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:21:35.736219 kubelet[2246]: I0117 12:21:35.736177 2246 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:21:35.772184 kubelet[2246]: E0117 12:21:35.771824 2246 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.772184 kubelet[2246]: I0117 12:21:35.771929 2246 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:21:35.789721 kubelet[2246]: I0117 12:21:35.789683 2246 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:21:35.790094 kubelet[2246]: I0117 12:21:35.790016 2246 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:21:35.790392 kubelet[2246]: I0117 12:21:35.790069 2246 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:21:35.791765 kubelet[2246]: I0117 12:21:35.791716 2246 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:21:35.791765 kubelet[2246]: I0117 12:21:35.791755 2246 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:21:35.791973 kubelet[2246]: I0117 12:21:35.791937 2246 
state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:35.793202 kubelet[2246]: I0117 12:21:35.793177 2246 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:21:35.793685 kubelet[2246]: I0117 12:21:35.793207 2246 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:21:35.793685 kubelet[2246]: I0117 12:21:35.793265 2246 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:21:35.793685 kubelet[2246]: I0117 12:21:35.793292 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:21:35.793868 kubelet[2246]: W0117 12:21:35.793808 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.793935 kubelet[2246]: E0117 12:21:35.793887 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.800504 kubelet[2246]: W0117 12:21:35.800446 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.800789 kubelet[2246]: E0117 12:21:35.800658 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.801524 kubelet[2246]: I0117 12:21:35.801328 2246 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:21:35.804678 kubelet[2246]: I0117 12:21:35.804645 2246 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:21:35.804863 kubelet[2246]: W0117 12:21:35.804846 2246 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:21:35.805803 kubelet[2246]: I0117 12:21:35.805731 2246 server.go:1264] "Started kubelet" Jan 17 12:21:35.810034 kubelet[2246]: I0117 12:21:35.809457 2246 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:21:35.812026 kubelet[2246]: I0117 12:21:35.810821 2246 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:21:35.815132 kubelet[2246]: I0117 12:21:35.813979 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:21:35.815132 kubelet[2246]: I0117 12:21:35.814063 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:21:35.815132 kubelet[2246]: I0117 12:21:35.814368 2246 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:21:35.819028 kubelet[2246]: E0117 12:21:35.818760 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal.181b7a3fd1f858e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,UID:ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,},FirstTimestamp:2025-01-17 12:21:35.805700329 +0000 UTC m=+0.992955438,LastTimestamp:2025-01-17 12:21:35.805700329 +0000 UTC m=+0.992955438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,}" Jan 17 12:21:35.822698 kubelet[2246]: E0117 12:21:35.822039 2246 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" not found" Jan 17 12:21:35.822698 kubelet[2246]: I0117 12:21:35.822099 2246 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:21:35.822698 kubelet[2246]: I0117 12:21:35.822235 2246 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:21:35.822698 kubelet[2246]: I0117 12:21:35.822334 2246 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:21:35.822950 kubelet[2246]: W0117 12:21:35.822817 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.822950 kubelet[2246]: E0117 12:21:35.822882 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.823446 kubelet[2246]: E0117 12:21:35.823380 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.38:6443: connect: connection refused" 
interval="200ms" Jan 17 12:21:35.824739 kubelet[2246]: I0117 12:21:35.824712 2246 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:21:35.825445 kubelet[2246]: I0117 12:21:35.825183 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:21:35.827118 kubelet[2246]: E0117 12:21:35.826773 2246 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:21:35.828764 kubelet[2246]: I0117 12:21:35.828738 2246 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:21:35.851668 kubelet[2246]: I0117 12:21:35.851424 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:21:35.856342 kubelet[2246]: I0117 12:21:35.856239 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:21:35.857282 kubelet[2246]: I0117 12:21:35.856729 2246 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:21:35.857282 kubelet[2246]: I0117 12:21:35.856785 2246 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:21:35.857282 kubelet[2246]: E0117 12:21:35.856879 2246 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:21:35.861731 kubelet[2246]: W0117 12:21:35.861654 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.861845 kubelet[2246]: E0117 12:21:35.861746 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:35.865175 kubelet[2246]: I0117 12:21:35.865142 2246 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:21:35.866199 kubelet[2246]: I0117 12:21:35.865772 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:21:35.866199 kubelet[2246]: I0117 12:21:35.865810 2246 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:35.871516 kubelet[2246]: I0117 12:21:35.871472 2246 policy_none.go:49] "None policy: Start" Jan 17 12:21:35.872666 kubelet[2246]: I0117 12:21:35.872552 2246 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:21:35.872666 kubelet[2246]: I0117 12:21:35.872611 2246 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:21:35.881749 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:21:35.899472 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:21:35.911307 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 17 12:21:35.913954 kubelet[2246]: I0117 12:21:35.913314 2246 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:21:35.913954 kubelet[2246]: I0117 12:21:35.913599 2246 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:21:35.913954 kubelet[2246]: I0117 12:21:35.913779 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:21:35.916499 kubelet[2246]: E0117 12:21:35.916459 2246 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" not found" Jan 17 12:21:35.928633 kubelet[2246]: I0117 12:21:35.928592 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:35.929144 kubelet[2246]: E0117 12:21:35.929080 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.38:6443/api/v1/nodes\": dial tcp 10.128.0.38:6443: connect: connection refused" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:35.957417 kubelet[2246]: I0117 12:21:35.957341 2246 topology_manager.go:215] "Topology Admit Handler" podUID="4061b45d52af7f83e52c20d9e00976cd" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:35.964888 kubelet[2246]: I0117 12:21:35.964821 2246 topology_manager.go:215] "Topology Admit Handler" podUID="e456e39c715f172f6fcaa581cb145a80" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:35.971387 kubelet[2246]: I0117 12:21:35.971023 2246 topology_manager.go:215] "Topology Admit Handler" podUID="41bc140503b60d7b9999ca9a5c556a94" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:35.978807 systemd[1]: Created slice kubepods-burstable-pod4061b45d52af7f83e52c20d9e00976cd.slice - libcontainer container kubepods-burstable-pod4061b45d52af7f83e52c20d9e00976cd.slice. Jan 17 12:21:35.993299 systemd[1]: Created slice kubepods-burstable-pode456e39c715f172f6fcaa581cb145a80.slice - libcontainer container kubepods-burstable-pode456e39c715f172f6fcaa581cb145a80.slice. Jan 17 12:21:36.011176 systemd[1]: Created slice kubepods-burstable-pod41bc140503b60d7b9999ca9a5c556a94.slice - libcontainer container kubepods-burstable-pod41bc140503b60d7b9999ca9a5c556a94.slice. 
Jan 17 12:21:36.024046 kubelet[2246]: E0117 12:21:36.023975 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.38:6443: connect: connection refused" interval="400ms" Jan 17 12:21:36.123554 kubelet[2246]: I0117 12:21:36.123481 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.123554 kubelet[2246]: I0117 12:21:36.123559 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4061b45d52af7f83e52c20d9e00976cd-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"4061b45d52af7f83e52c20d9e00976cd\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.123800 kubelet[2246]: I0117 12:21:36.123589 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e456e39c715f172f6fcaa581cb145a80-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"e456e39c715f172f6fcaa581cb145a80\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.123800 kubelet[2246]: I0117 12:21:36.123619 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e456e39c715f172f6fcaa581cb145a80-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"e456e39c715f172f6fcaa581cb145a80\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.123800 kubelet[2246]: I0117 12:21:36.123646 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.123800 kubelet[2246]: I0117 12:21:36.123670 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.124003 kubelet[2246]: I0117 12:21:36.123697 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.124003 kubelet[2246]: I0117 12:21:36.123725 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e456e39c715f172f6fcaa581cb145a80-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"e456e39c715f172f6fcaa581cb145a80\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.124003 kubelet[2246]: I0117 12:21:36.123758 2246 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.134136 kubelet[2246]: I0117 12:21:36.134102 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.134590 kubelet[2246]: E0117 12:21:36.134517 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.38:6443/api/v1/nodes\": dial tcp 10.128.0.38:6443: connect: connection refused" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.289809 containerd[1468]: time="2025-01-17T12:21:36.289658188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,Uid:4061b45d52af7f83e52c20d9e00976cd,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:36.309790 containerd[1468]: time="2025-01-17T12:21:36.309714351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,Uid:e456e39c715f172f6fcaa581cb145a80,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:36.315901 containerd[1468]: time="2025-01-17T12:21:36.315563925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,Uid:41bc140503b60d7b9999ca9a5c556a94,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:36.425103 kubelet[2246]: E0117 12:21:36.425037 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.38:6443: connect: connection refused" interval="800ms" Jan 17 12:21:36.539523 kubelet[2246]: I0117 12:21:36.539411 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.540004 kubelet[2246]: E0117 12:21:36.539890 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.38:6443/api/v1/nodes\": dial tcp 10.128.0.38:6443: connect: connection refused" 
node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:36.667731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90172192.mount: Deactivated successfully. Jan 17 12:21:36.677476 containerd[1468]: time="2025-01-17T12:21:36.677403940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:36.678847 containerd[1468]: time="2025-01-17T12:21:36.678781662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:36.680109 containerd[1468]: time="2025-01-17T12:21:36.680036504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 17 12:21:36.681140 containerd[1468]: time="2025-01-17T12:21:36.681066385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:36.682899 containerd[1468]: time="2025-01-17T12:21:36.682836591Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:36.684019 containerd[1468]: time="2025-01-17T12:21:36.683878793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:21:36.685571 containerd[1468]: time="2025-01-17T12:21:36.685504690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:21:36.688819 containerd[1468]: time="2025-01-17T12:21:36.688705497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:36.690668 containerd[1468]: time="2025-01-17T12:21:36.690270928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 400.491466ms" Jan 17 12:21:36.693664 containerd[1468]: time="2025-01-17T12:21:36.693602567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 377.924898ms" Jan 17 12:21:36.698803 containerd[1468]: time="2025-01-17T12:21:36.698738348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 388.920168ms" Jan 17 12:21:36.779292 kubelet[2246]: W0117 12:21:36.778338 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://10.128.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:36.779292 kubelet[2246]: E0117 12:21:36.778466 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:36.821747 kubelet[2246]: W0117 12:21:36.821466 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:36.821747 kubelet[2246]: E0117 12:21:36.821589 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:36.884342 containerd[1468]: time="2025-01-17T12:21:36.883897375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:36.886208 containerd[1468]: time="2025-01-17T12:21:36.886141901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:36.888041 containerd[1468]: time="2025-01-17T12:21:36.887776690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:36.888041 containerd[1468]: time="2025-01-17T12:21:36.887945506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:36.888677 containerd[1468]: time="2025-01-17T12:21:36.888567576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:36.889168 containerd[1468]: time="2025-01-17T12:21:36.888818649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:36.889168 containerd[1468]: time="2025-01-17T12:21:36.888863001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:36.889168 containerd[1468]: time="2025-01-17T12:21:36.889032964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:36.897257 containerd[1468]: time="2025-01-17T12:21:36.897067852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:36.897520 containerd[1468]: time="2025-01-17T12:21:36.897465698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:36.897705 containerd[1468]: time="2025-01-17T12:21:36.897661986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:36.898006 containerd[1468]: time="2025-01-17T12:21:36.897954689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:36.934809 systemd[1]: Started cri-containerd-60a50f1846f337d2ab72a8306412e0929b9cac1148350a108b2752bd18ec2430.scope - libcontainer container 60a50f1846f337d2ab72a8306412e0929b9cac1148350a108b2752bd18ec2430. Jan 17 12:21:36.945834 systemd[1]: Started cri-containerd-ad73e159677913e179fbc1f6d258260c53540b8c6ccd3bd9f5ab6690a41e688b.scope - libcontainer container ad73e159677913e179fbc1f6d258260c53540b8c6ccd3bd9f5ab6690a41e688b. Jan 17 12:21:36.952742 systemd[1]: Started cri-containerd-e7f1c187fbc3938718831f6479d8a7c6b38677a28669d9b809bfc7da23f7a7d0.scope - libcontainer container e7f1c187fbc3938718831f6479d8a7c6b38677a28669d9b809bfc7da23f7a7d0. Jan 17 12:21:37.050438 containerd[1468]: time="2025-01-17T12:21:37.050154955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,Uid:41bc140503b60d7b9999ca9a5c556a94,Namespace:kube-system,Attempt:0,} returns sandbox id \"60a50f1846f337d2ab72a8306412e0929b9cac1148350a108b2752bd18ec2430\"" Jan 17 12:21:37.056106 kubelet[2246]: E0117 12:21:37.055794 2246 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flat" Jan 17 12:21:37.060772 containerd[1468]: time="2025-01-17T12:21:37.059814427Z" level=info msg="CreateContainer within sandbox \"60a50f1846f337d2ab72a8306412e0929b9cac1148350a108b2752bd18ec2430\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:21:37.063615 containerd[1468]: time="2025-01-17T12:21:37.063226194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,Uid:4061b45d52af7f83e52c20d9e00976cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad73e159677913e179fbc1f6d258260c53540b8c6ccd3bd9f5ab6690a41e688b\"" Jan 17 12:21:37.066319 kubelet[2246]: E0117 12:21:37.066266 2246 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-21291" Jan 17 12:21:37.068833 containerd[1468]: time="2025-01-17T12:21:37.068774872Z" level=info msg="CreateContainer within sandbox \"ad73e159677913e179fbc1f6d258260c53540b8c6ccd3bd9f5ab6690a41e688b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:21:37.082186 containerd[1468]: time="2025-01-17T12:21:37.081428322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal,Uid:e456e39c715f172f6fcaa581cb145a80,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7f1c187fbc3938718831f6479d8a7c6b38677a28669d9b809bfc7da23f7a7d0\"" Jan 17 12:21:37.083896 kubelet[2246]: E0117 12:21:37.083789 2246 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-21291" Jan 17 
12:21:37.086026 containerd[1468]: time="2025-01-17T12:21:37.085984865Z" level=info msg="CreateContainer within sandbox \"e7f1c187fbc3938718831f6479d8a7c6b38677a28669d9b809bfc7da23f7a7d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:21:37.097842 kubelet[2246]: W0117 12:21:37.097791 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:37.098280 kubelet[2246]: E0117 12:21:37.098211 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:37.100046 containerd[1468]: time="2025-01-17T12:21:37.099684204Z" level=info msg="CreateContainer within sandbox \"60a50f1846f337d2ab72a8306412e0929b9cac1148350a108b2752bd18ec2430\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df3c702485962dc21736a1948c018d58b39b2fdd442b0d2ddb72ff7df3f94851\"" Jan 17 12:21:37.101192 containerd[1468]: time="2025-01-17T12:21:37.101113069Z" level=info msg="StartContainer for \"df3c702485962dc21736a1948c018d58b39b2fdd442b0d2ddb72ff7df3f94851\"" Jan 17 12:21:37.105232 containerd[1468]: time="2025-01-17T12:21:37.105187118Z" level=info msg="CreateContainer within sandbox \"ad73e159677913e179fbc1f6d258260c53540b8c6ccd3bd9f5ab6690a41e688b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7af5821cfa746393c2b0d92d75f706fd959c67eb07480a565c314ac22dd1bcf4\"" Jan 17 12:21:37.107409 containerd[1468]: time="2025-01-17T12:21:37.107374206Z" level=info msg="StartContainer for \"7af5821cfa746393c2b0d92d75f706fd959c67eb07480a565c314ac22dd1bcf4\"" Jan 17 12:21:37.118785 containerd[1468]: time="2025-01-17T12:21:37.118735328Z" level=info msg="CreateContainer within sandbox \"e7f1c187fbc3938718831f6479d8a7c6b38677a28669d9b809bfc7da23f7a7d0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"70e66518c076d03be5cde8bd8ad1fd49f160a0468683caa6718906b18f9aa21b\"" Jan 17 12:21:37.120840 containerd[1468]: time="2025-01-17T12:21:37.120728987Z" level=info msg="StartContainer for \"70e66518c076d03be5cde8bd8ad1fd49f160a0468683caa6718906b18f9aa21b\"" Jan 17 12:21:37.156533 systemd[1]: Started cri-containerd-df3c702485962dc21736a1948c018d58b39b2fdd442b0d2ddb72ff7df3f94851.scope - libcontainer container df3c702485962dc21736a1948c018d58b39b2fdd442b0d2ddb72ff7df3f94851. Jan 17 12:21:37.181507 systemd[1]: Started cri-containerd-7af5821cfa746393c2b0d92d75f706fd959c67eb07480a565c314ac22dd1bcf4.scope - libcontainer container 7af5821cfa746393c2b0d92d75f706fd959c67eb07480a565c314ac22dd1bcf4. Jan 17 12:21:37.197699 systemd[1]: Started cri-containerd-70e66518c076d03be5cde8bd8ad1fd49f160a0468683caa6718906b18f9aa21b.scope - libcontainer container 70e66518c076d03be5cde8bd8ad1fd49f160a0468683caa6718906b18f9aa21b. 
Jan 17 12:21:37.226016 kubelet[2246]: E0117 12:21:37.225950 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.38:6443: connect: connection refused" interval="1.6s" Jan 17 12:21:37.231189 kubelet[2246]: W0117 12:21:37.230650 2246 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:37.231388 kubelet[2246]: E0117 12:21:37.231205 2246 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.38:6443: connect: connection refused Jan 17 12:21:37.275048 containerd[1468]: time="2025-01-17T12:21:37.274945281Z" level=info msg="StartContainer for \"df3c702485962dc21736a1948c018d58b39b2fdd442b0d2ddb72ff7df3f94851\" returns successfully" Jan 17 12:21:37.315093 containerd[1468]: time="2025-01-17T12:21:37.314938356Z" level=info msg="StartContainer for \"70e66518c076d03be5cde8bd8ad1fd49f160a0468683caa6718906b18f9aa21b\" returns successfully" Jan 17 12:21:37.341331 containerd[1468]: time="2025-01-17T12:21:37.340607121Z" level=info msg="StartContainer for \"7af5821cfa746393c2b0d92d75f706fd959c67eb07480a565c314ac22dd1bcf4\" returns successfully" Jan 17 12:21:37.348648 kubelet[2246]: I0117 12:21:37.348600 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:37.349197 kubelet[2246]: E0117 12:21:37.349138 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.38:6443/api/v1/nodes\": dial tcp 10.128.0.38:6443: connect: connection refused" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:38.957176 kubelet[2246]: I0117 12:21:38.956564 2246 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:40.030778 kubelet[2246]: E0117 12:21:40.030708 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:40.069411 kubelet[2246]: I0117 12:21:40.069178 2246 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:40.168480 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 17 12:21:40.186278 kubelet[2246]: E0117 12:21:40.185505 2246 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:40.796869 kubelet[2246]: I0117 12:21:40.796822 2246 apiserver.go:52] "Watching apiserver" Jan 17 12:21:40.823132 kubelet[2246]: I0117 12:21:40.823059 2246 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:21:42.264917 systemd[1]: Reloading requested from client PID 2522 ('systemctl') (unit session-7.scope)... Jan 17 12:21:42.264941 systemd[1]: Reloading... Jan 17 12:21:42.417283 zram_generator::config[2562]: No configuration found. Jan 17 12:21:42.635024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:42.771998 systemd[1]: Reloading finished in 506 ms. Jan 17 12:21:42.827374 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:42.849138 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:21:42.849487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:42.849570 systemd[1]: kubelet.service: Consumed 1.526s CPU time, 115.1M memory peak, 0B memory swap peak. Jan 17 12:21:42.855766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:43.149199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:43.164906 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:21:43.252457 kubelet[2610]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:21:43.252457 kubelet[2610]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:21:43.252457 kubelet[2610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:21:43.253066 kubelet[2610]: I0117 12:21:43.252568 2610 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:21:43.266896 kubelet[2610]: I0117 12:21:43.266846 2610 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:21:43.266896 kubelet[2610]: I0117 12:21:43.266891 2610 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:21:43.267576 kubelet[2610]: I0117 12:21:43.267449 2610 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:21:43.271323 kubelet[2610]: I0117 12:21:43.271286 2610 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 17 12:21:43.273887 kubelet[2610]: I0117 12:21:43.273611 2610 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:21:43.286130 kubelet[2610]: I0117 12:21:43.286093 2610 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:21:43.287285 kubelet[2610]: I0117 12:21:43.286788 2610 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:21:43.287285 kubelet[2610]: I0117 12:21:43.286837 2610 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:21:43.287285 kubelet[2610]: I0117 12:21:43.287140 2610 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:21:43.287285 kubelet[2610]: I0117 12:21:43.287157 2610 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:21:43.287694 kubelet[2610]: I0117 12:21:43.287222 2610 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:43.287694 kubelet[2610]: I0117 12:21:43.287449 2610 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:21:43.287694 kubelet[2610]: I0117 12:21:43.287471 2610 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:21:43.287694 kubelet[2610]: I0117 12:21:43.287504 2610 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:21:43.287694 kubelet[2610]: I0117 12:21:43.287527 2610 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:21:43.290768 kubelet[2610]: I0117 12:21:43.290547 2610 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:21:43.293118 kubelet[2610]: I0117 12:21:43.290796 2610 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:21:43.293118 kubelet[2610]: I0117 12:21:43.291491 2610 server.go:1264] "Started 
kubelet" Jan 17 12:21:43.296321 kubelet[2610]: I0117 12:21:43.296094 2610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:21:43.312603 kubelet[2610]: I0117 12:21:43.312534 2610 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:21:43.316489 kubelet[2610]: I0117 12:21:43.316460 2610 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:21:43.325398 kubelet[2610]: I0117 12:21:43.320219 2610 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:21:43.325830 kubelet[2610]: I0117 12:21:43.325810 2610 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:21:43.337083 kubelet[2610]: I0117 12:21:43.337040 2610 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:21:43.349872 kubelet[2610]: I0117 12:21:43.349827 2610 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:21:43.352069 kubelet[2610]: I0117 12:21:43.351436 2610 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:21:43.359834 kubelet[2610]: I0117 12:21:43.359771 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:21:43.360958 kubelet[2610]: I0117 12:21:43.360898 2610 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:21:43.361143 kubelet[2610]: I0117 12:21:43.361031 2610 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:21:43.363039 kubelet[2610]: I0117 12:21:43.362965 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:21:43.363606 kubelet[2610]: I0117 12:21:43.363240 2610 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:21:43.363606 kubelet[2610]: I0117 12:21:43.363302 2610 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:21:43.363606 kubelet[2610]: E0117 12:21:43.363359 2610 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:21:43.372123 kubelet[2610]: E0117 12:21:43.372038 2610 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:21:43.378496 kubelet[2610]: I0117 12:21:43.377855 2610 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:21:43.446100 kubelet[2610]: I0117 12:21:43.445604 2610 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.457659 kubelet[2610]: I0117 12:21:43.457591 2610 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:21:43.457659 kubelet[2610]: I0117 12:21:43.457643 2610 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:21:43.457659 kubelet[2610]: I0117 12:21:43.457673 2610 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:43.459617 kubelet[2610]: I0117 12:21:43.458076 2610 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:21:43.459617 kubelet[2610]: I0117 12:21:43.458101 2610 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:21:43.459617 kubelet[2610]: I0117 12:21:43.458166 2610 policy_none.go:49] "None policy: Start" Jan 17 12:21:43.459617 kubelet[2610]: I0117 12:21:43.459450 2610 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:21:43.459617 kubelet[2610]: I0117 12:21:43.459481 2610 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:21:43.460175 kubelet[2610]: I0117 12:21:43.459828 2610 state_mem.go:75] "Updated machine memory state" Jan 17 12:21:43.464195 kubelet[2610]: E0117 12:21:43.464131 2610 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:21:43.469971 kubelet[2610]: I0117 12:21:43.468028 2610 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.469971 kubelet[2610]: I0117 12:21:43.468136 2610 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.486070 kubelet[2610]: I0117 12:21:43.485295 2610 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:21:43.486070 kubelet[2610]: I0117 12:21:43.485538 2610 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:21:43.486070 kubelet[2610]: I0117 12:21:43.485672 2610 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:21:43.665281 kubelet[2610]: I0117 12:21:43.664809 2610 topology_manager.go:215] "Topology Admit Handler" podUID="e456e39c715f172f6fcaa581cb145a80" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.665281 kubelet[2610]: I0117 12:21:43.664952 2610 topology_manager.go:215] "Topology Admit Handler" podUID="41bc140503b60d7b9999ca9a5c556a94" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.665281 kubelet[2610]: I0117 12:21:43.665044 2610 topology_manager.go:215] "Topology Admit Handler" podUID="4061b45d52af7f83e52c20d9e00976cd" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.672490 kubelet[2610]: W0117 12:21:43.672225 2610 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must be no more than 63 characters must not contain dots] Jan 17 12:21:43.675337 kubelet[2610]: W0117 12:21:43.674286 2610 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:21:43.675337 kubelet[2610]: W0117 12:21:43.674980 2610 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:21:43.754505 kubelet[2610]: I0117 12:21:43.753973 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.754505 kubelet[2610]: I0117 12:21:43.754051 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4061b45d52af7f83e52c20d9e00976cd-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"4061b45d52af7f83e52c20d9e00976cd\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.754505 kubelet[2610]: I0117 12:21:43.754085 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e456e39c715f172f6fcaa581cb145a80-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"e456e39c715f172f6fcaa581cb145a80\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.754505 kubelet[2610]: I0117 12:21:43.754117 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e456e39c715f172f6fcaa581cb145a80-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"e456e39c715f172f6fcaa581cb145a80\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.754862 kubelet[2610]: I0117 12:21:43.754152 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e456e39c715f172f6fcaa581cb145a80-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"e456e39c715f172f6fcaa581cb145a80\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.754862 kubelet[2610]: I0117 12:21:43.754182 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.754862 
kubelet[2610]: I0117 12:21:43.754213 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.754862 kubelet[2610]: I0117 12:21:43.754265 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:43.755097 kubelet[2610]: I0117 12:21:43.754301 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41bc140503b60d7b9999ca9a5c556a94-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" (UID: \"41bc140503b60d7b9999ca9a5c556a94\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:21:44.308276 kubelet[2610]: I0117 12:21:44.306723 2610 apiserver.go:52] "Watching apiserver" Jan 17 12:21:44.351546 kubelet[2610]: I0117 12:21:44.351457 2610 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:21:44.542503 kubelet[2610]: I0117 12:21:44.542320 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" podStartSLOduration=1.5422904910000002 podStartE2EDuration="1.542290491s" podCreationTimestamp="2025-01-17 12:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:44.510572503 +0000 UTC m=+1.338671062" watchObservedRunningTime="2025-01-17 12:21:44.542290491 +0000 UTC m=+1.370389047" Jan 17 12:21:44.576268 kubelet[2610]: I0117 12:21:44.575894 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" podStartSLOduration=1.575870583 podStartE2EDuration="1.575870583s" podCreationTimestamp="2025-01-17 12:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:44.545825097 +0000 UTC m=+1.373923652" watchObservedRunningTime="2025-01-17 12:21:44.575870583 +0000 UTC m=+1.403969117" Jan 17 12:21:44.598539 kubelet[2610]: I0117 12:21:44.598453 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" podStartSLOduration=1.598392824 podStartE2EDuration="1.598392824s" podCreationTimestamp="2025-01-17 12:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:44.577527675 +0000 UTC m=+1.405626232" 
watchObservedRunningTime="2025-01-17 12:21:44.598392824 +0000 UTC m=+1.426491380" Jan 17 12:21:49.055174 sudo[1708]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:49.099087 sshd[1705]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:49.104280 systemd[1]: sshd@6-10.128.0.38:22-139.178.89.65:51044.service: Deactivated successfully. Jan 17 12:21:49.107053 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:21:49.107463 systemd[1]: session-7.scope: Consumed 6.848s CPU time, 193.9M memory peak, 0B memory swap peak. Jan 17 12:21:49.109501 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:21:49.111483 systemd-logind[1448]: Removed session 7. Jan 17 12:21:54.844425 update_engine[1449]: I20250117 12:21:54.844318 1449 update_attempter.cc:509] Updating boot flags... Jan 17 12:21:54.911293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2697) Jan 17 12:21:55.043569 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2699) Jan 17 12:21:55.155283 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2699) Jan 17 12:21:55.813487 kubelet[2610]: I0117 12:21:55.813442 2610 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:21:55.814320 containerd[1468]: time="2025-01-17T12:21:55.814150895Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:21:55.815081 kubelet[2610]: I0117 12:21:55.814635 2610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:21:56.628153 kubelet[2610]: I0117 12:21:56.628001 2610 topology_manager.go:215] "Topology Admit Handler" podUID="43d4e296-d946-4597-b07a-3ed263605dbd" podNamespace="kube-system" podName="kube-proxy-xsc2g" Jan 17 12:21:56.648346 systemd[1]: Created slice kubepods-besteffort-pod43d4e296_d946_4597_b07a_3ed263605dbd.slice - libcontainer container kubepods-besteffort-pod43d4e296_d946_4597_b07a_3ed263605dbd.slice. 
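[Editor's aside — not part of the log.] The runtime-config update above hands this node its pod CIDR (originalPodCIDR was empty, so this is the first assignment). As an illustrative check, parsing 192.168.0.0/24 confirms the per-node address budget:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The node's newly assigned pod CIDR from the log entry above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("pod CIDR %v holds %d addresses\n", ipnet, 1<<(bits-ones)) // 256
}
```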
Jan 17 12:21:56.667295 kubelet[2610]: I0117 12:21:56.667232 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43d4e296-d946-4597-b07a-3ed263605dbd-lib-modules\") pod \"kube-proxy-xsc2g\" (UID: \"43d4e296-d946-4597-b07a-3ed263605dbd\") " pod="kube-system/kube-proxy-xsc2g" Jan 17 12:21:56.667555 kubelet[2610]: I0117 12:21:56.667306 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqkx\" (UniqueName: \"kubernetes.io/projected/43d4e296-d946-4597-b07a-3ed263605dbd-kube-api-access-jnqkx\") pod \"kube-proxy-xsc2g\" (UID: \"43d4e296-d946-4597-b07a-3ed263605dbd\") " pod="kube-system/kube-proxy-xsc2g" Jan 17 12:21:56.667555 kubelet[2610]: I0117 12:21:56.667348 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43d4e296-d946-4597-b07a-3ed263605dbd-kube-proxy\") pod \"kube-proxy-xsc2g\" (UID: \"43d4e296-d946-4597-b07a-3ed263605dbd\") " pod="kube-system/kube-proxy-xsc2g" Jan 17 12:21:56.667555 kubelet[2610]: I0117 12:21:56.667373 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43d4e296-d946-4597-b07a-3ed263605dbd-xtables-lock\") pod \"kube-proxy-xsc2g\" (UID: \"43d4e296-d946-4597-b07a-3ed263605dbd\") " pod="kube-system/kube-proxy-xsc2g" Jan 17 12:21:56.858820 kubelet[2610]: I0117 12:21:56.858765 2610 topology_manager.go:215] "Topology Admit Handler" podUID="1f28579b-ac7d-486c-a8e1-e96ccd1b9af1" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-5zxh5" Jan 17 12:21:56.866545 kubelet[2610]: W0117 12:21:56.866433 2610 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' and this object Jan 17 12:21:56.866545 kubelet[2610]: E0117 12:21:56.866511 2610 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' and this object Jan 17 12:21:56.875481 systemd[1]: Created slice kubepods-besteffort-pod1f28579b_ac7d_486c_a8e1_e96ccd1b9af1.slice - libcontainer container kubepods-besteffort-pod1f28579b_ac7d_486c_a8e1_e96ccd1b9af1.slice. 
Jan 17 12:21:56.957883 containerd[1468]: time="2025-01-17T12:21:56.957705965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsc2g,Uid:43d4e296-d946-4597-b07a-3ed263605dbd,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:56.969301 kubelet[2610]: I0117 12:21:56.969082 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f28579b-ac7d-486c-a8e1-e96ccd1b9af1-var-lib-calico\") pod \"tigera-operator-7bc55997bb-5zxh5\" (UID: \"1f28579b-ac7d-486c-a8e1-e96ccd1b9af1\") " pod="tigera-operator/tigera-operator-7bc55997bb-5zxh5" Jan 17 12:21:56.969301 kubelet[2610]: I0117 12:21:56.969141 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8blv4\" (UniqueName: \"kubernetes.io/projected/1f28579b-ac7d-486c-a8e1-e96ccd1b9af1-kube-api-access-8blv4\") pod \"tigera-operator-7bc55997bb-5zxh5\" (UID: \"1f28579b-ac7d-486c-a8e1-e96ccd1b9af1\") " pod="tigera-operator/tigera-operator-7bc55997bb-5zxh5" Jan 17 12:21:56.998428 containerd[1468]: time="2025-01-17T12:21:56.998302909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:56.998428 containerd[1468]: time="2025-01-17T12:21:56.998391080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:56.998928 containerd[1468]: time="2025-01-17T12:21:56.998409599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:56.998928 containerd[1468]: time="2025-01-17T12:21:56.998835479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:57.036510 systemd[1]: Started cri-containerd-aaf3bf4d8f1fb5b367b72ce6bed1304c3fb1ef210b0c3fa10065e9f1322d49c3.scope - libcontainer container aaf3bf4d8f1fb5b367b72ce6bed1304c3fb1ef210b0c3fa10065e9f1322d49c3. Jan 17 12:21:57.068854 containerd[1468]: time="2025-01-17T12:21:57.068549175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsc2g,Uid:43d4e296-d946-4597-b07a-3ed263605dbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf3bf4d8f1fb5b367b72ce6bed1304c3fb1ef210b0c3fa10065e9f1322d49c3\"" Jan 17 12:21:57.073635 containerd[1468]: time="2025-01-17T12:21:57.073555758Z" level=info msg="CreateContainer within sandbox \"aaf3bf4d8f1fb5b367b72ce6bed1304c3fb1ef210b0c3fa10065e9f1322d49c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:21:57.101892 containerd[1468]: time="2025-01-17T12:21:57.101826496Z" level=info msg="CreateContainer within sandbox \"aaf3bf4d8f1fb5b367b72ce6bed1304c3fb1ef210b0c3fa10065e9f1322d49c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f8636b919be408b5f68105d86425511c9a27bfeb7cb7fa5f1e3a048ff4461cd\"" Jan 17 12:21:57.102787 containerd[1468]: time="2025-01-17T12:21:57.102570257Z" level=info msg="StartContainer for \"9f8636b919be408b5f68105d86425511c9a27bfeb7cb7fa5f1e3a048ff4461cd\"" Jan 17 12:21:57.141477 systemd[1]: Started cri-containerd-9f8636b919be408b5f68105d86425511c9a27bfeb7cb7fa5f1e3a048ff4461cd.scope - libcontainer container 9f8636b919be408b5f68105d86425511c9a27bfeb7cb7fa5f1e3a048ff4461cd. 
Jan 17 12:21:57.180025 containerd[1468]: time="2025-01-17T12:21:57.179974899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5zxh5,Uid:1f28579b-ac7d-486c-a8e1-e96ccd1b9af1,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:21:57.180808 containerd[1468]: time="2025-01-17T12:21:57.180134905Z" level=info msg="StartContainer for \"9f8636b919be408b5f68105d86425511c9a27bfeb7cb7fa5f1e3a048ff4461cd\" returns successfully" Jan 17 12:21:57.224908 containerd[1468]: time="2025-01-17T12:21:57.223309685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:57.224908 containerd[1468]: time="2025-01-17T12:21:57.223604038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:57.224908 containerd[1468]: time="2025-01-17T12:21:57.223628249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:57.224908 containerd[1468]: time="2025-01-17T12:21:57.223944636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:57.256545 systemd[1]: Started cri-containerd-a5999f60c7b4b41b58d4c6616a51d764be5b582847e5e58a164bcd90fc786247.scope - libcontainer container a5999f60c7b4b41b58d4c6616a51d764be5b582847e5e58a164bcd90fc786247. Jan 17 12:21:57.346621 containerd[1468]: time="2025-01-17T12:21:57.346559716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5zxh5,Uid:1f28579b-ac7d-486c-a8e1-e96ccd1b9af1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a5999f60c7b4b41b58d4c6616a51d764be5b582847e5e58a164bcd90fc786247\"" Jan 17 12:21:57.349179 containerd[1468]: time="2025-01-17T12:21:57.349140067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:21:59.418548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127234553.mount: Deactivated successfully. 
Jan 17 12:22:00.129817 containerd[1468]: time="2025-01-17T12:22:00.129747557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:00.131322 containerd[1468]: time="2025-01-17T12:22:00.131239110Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764289" Jan 17 12:22:00.132805 containerd[1468]: time="2025-01-17T12:22:00.132718031Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:00.136495 containerd[1468]: time="2025-01-17T12:22:00.136430604Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:00.137891 containerd[1468]: time="2025-01-17T12:22:00.137699389Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.788155406s" Jan 17 12:22:00.137891 containerd[1468]: time="2025-01-17T12:22:00.137745600Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:22:00.141556 containerd[1468]: time="2025-01-17T12:22:00.141507606Z" level=info msg="CreateContainer within sandbox \"a5999f60c7b4b41b58d4c6616a51d764be5b582847e5e58a164bcd90fc786247\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:22:00.161040 containerd[1468]: time="2025-01-17T12:22:00.160978040Z" level=info msg="CreateContainer within sandbox \"a5999f60c7b4b41b58d4c6616a51d764be5b582847e5e58a164bcd90fc786247\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"edf62c86352d82200d97647eaea2ad5b24d4dc157a357613b203f841934629df\"" Jan 17 12:22:00.161741 containerd[1468]: time="2025-01-17T12:22:00.161705768Z" level=info msg="StartContainer for \"edf62c86352d82200d97647eaea2ad5b24d4dc157a357613b203f841934629df\"" Jan 17 12:22:00.206678 systemd[1]: Started cri-containerd-edf62c86352d82200d97647eaea2ad5b24d4dc157a357613b203f841934629df.scope - libcontainer container edf62c86352d82200d97647eaea2ad5b24d4dc157a357613b203f841934629df. 
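[Editor's aside — not part of the log.] The two containerd lines above are enough for a rough pull-throughput estimate: 21764289 bytes read over the reported 2.788155406s pull. Illustrative arithmetic (compressed bytes, so only an approximation of wire speed):

```go
package main

import "fmt"

func main() {
	// Figures taken verbatim from the containerd log lines above.
	const bytesRead = 21764289      // "bytes read" when pulling stopped
	const pullSeconds = 2.788155406 // duration reported by the Pulled line
	fmt.Printf("%.1f MB/s\n", bytesRead/pullSeconds/1e6) // ≈ 7.8 MB/s
}
```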
Jan 17 12:22:00.240817 containerd[1468]: time="2025-01-17T12:22:00.240646082Z" level=info msg="StartContainer for \"edf62c86352d82200d97647eaea2ad5b24d4dc157a357613b203f841934629df\" returns successfully" Jan 17 12:22:00.461869 kubelet[2610]: I0117 12:22:00.461097 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xsc2g" podStartSLOduration=4.461072627 podStartE2EDuration="4.461072627s" podCreationTimestamp="2025-01-17 12:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:57.461288712 +0000 UTC m=+14.289387271" watchObservedRunningTime="2025-01-17 12:22:00.461072627 +0000 UTC m=+17.289171192" Jan 17 12:22:03.381691 kubelet[2610]: I0117 12:22:03.381603 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-5zxh5" podStartSLOduration=4.5908078660000005 podStartE2EDuration="7.381577022s" podCreationTimestamp="2025-01-17 12:21:56 +0000 UTC" firstStartedPulling="2025-01-17 12:21:57.348426936 +0000 UTC m=+14.176525489" lastFinishedPulling="2025-01-17 12:22:00.139196105 +0000 UTC m=+16.967294645" observedRunningTime="2025-01-17 12:22:00.461677637 +0000 UTC m=+17.289776193" watchObservedRunningTime="2025-01-17 12:22:03.381577022 +0000 UTC m=+20.209675596" Jan 17 12:22:03.737506 kubelet[2610]: I0117 12:22:03.737343 2610 topology_manager.go:215] "Topology Admit Handler" podUID="04762b83-9859-4ba7-8017-1bd889867ddf" podNamespace="calico-system" podName="calico-typha-588cd587fd-rx6m6" Jan 17 12:22:03.752118 systemd[1]: Created slice kubepods-besteffort-pod04762b83_9859_4ba7_8017_1bd889867ddf.slice - libcontainer container kubepods-besteffort-pod04762b83_9859_4ba7_8017_1bd889867ddf.slice. Jan 17 12:22:03.819281 kubelet[2610]: I0117 12:22:03.817114 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/04762b83-9859-4ba7-8017-1bd889867ddf-typha-certs\") pod \"calico-typha-588cd587fd-rx6m6\" (UID: \"04762b83-9859-4ba7-8017-1bd889867ddf\") " pod="calico-system/calico-typha-588cd587fd-rx6m6" Jan 17 12:22:03.819281 kubelet[2610]: I0117 12:22:03.817180 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04762b83-9859-4ba7-8017-1bd889867ddf-tigera-ca-bundle\") pod \"calico-typha-588cd587fd-rx6m6\" (UID: \"04762b83-9859-4ba7-8017-1bd889867ddf\") " pod="calico-system/calico-typha-588cd587fd-rx6m6" Jan 17 12:22:03.819281 kubelet[2610]: I0117 12:22:03.817215 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdmbg\" (UniqueName: \"kubernetes.io/projected/04762b83-9859-4ba7-8017-1bd889867ddf-kube-api-access-jdmbg\") pod \"calico-typha-588cd587fd-rx6m6\" (UID: \"04762b83-9859-4ba7-8017-1bd889867ddf\") " pod="calico-system/calico-typha-588cd587fd-rx6m6" Jan 17 12:22:03.998309 kubelet[2610]: I0117 12:22:03.998126 2610 topology_manager.go:215] "Topology Admit Handler" podUID="3e894b53-0a8f-4242-acbe-0970b488515e" podNamespace="calico-system" podName="calico-node-hwr7p" Jan 17 12:22:04.016779 systemd[1]: Created slice kubepods-besteffort-pod3e894b53_0a8f_4242_acbe_0970b488515e.slice - libcontainer container kubepods-besteffort-pod3e894b53_0a8f_4242_acbe_0970b488515e.slice. 
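[Editor's aside — not part of the log.] The tigera-operator startup entry above decomposes cleanly: podStartSLOduration is the end-to-end startup time minus the image-pull window, with the pull window taken from the monotonic m=+ offsets rather than the wall-clock timestamps. Illustrative arithmetic; it reproduces the logged value up to float noise (the log shows 4.5908078660000005):

```go
package main

import "fmt"

func main() {
	// E2E duration and monotonic m=+ offsets from the log entry above.
	const e2e = 7.381577022        // podStartE2EDuration, seconds
	const pullStart = 14.176525489 // firstStartedPulling offset
	const pullEnd = 16.967294645   // lastFinishedPulling offset

	slo := e2e - (pullEnd - pullStart) // startup latency excluding image pull
	fmt.Printf("podStartSLOduration ≈ %.9fs\n", slo) // ≈ 4.590807866s
}
```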
Jan 17 12:22:04.057957 containerd[1468]: time="2025-01-17T12:22:04.057826804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588cd587fd-rx6m6,Uid:04762b83-9859-4ba7-8017-1bd889867ddf,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:04.106533 containerd[1468]: time="2025-01-17T12:22:04.105509840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:04.106533 containerd[1468]: time="2025-01-17T12:22:04.105807231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:04.108341 containerd[1468]: time="2025-01-17T12:22:04.107027826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:04.108341 containerd[1468]: time="2025-01-17T12:22:04.107168924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:04.121978 kubelet[2610]: I0117 12:22:04.120440 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-lib-modules\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.121978 kubelet[2610]: I0117 12:22:04.120503 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-xtables-lock\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.121978 kubelet[2610]: I0117 12:22:04.120534 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e894b53-0a8f-4242-acbe-0970b488515e-node-certs\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.121978 kubelet[2610]: I0117 12:22:04.120566 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-log-dir\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.121978 kubelet[2610]: I0117 12:22:04.120601 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbz5\" (UniqueName: \"kubernetes.io/projected/3e894b53-0a8f-4242-acbe-0970b488515e-kube-api-access-kxbz5\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.122394 kubelet[2610]: I0117 12:22:04.120630 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-lib-calico\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.122394 kubelet[2610]: I0117 12:22:04.120662 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-flexvol-driver-host\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.122394 kubelet[2610]: I0117 12:22:04.120693 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e894b53-0a8f-4242-acbe-0970b488515e-tigera-ca-bundle\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.122394 kubelet[2610]: I0117 12:22:04.120722 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-run-calico\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.122394 kubelet[2610]: I0117 12:22:04.120750 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-net-dir\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.122673 kubelet[2610]: I0117 12:22:04.120778 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-policysync\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.122673 kubelet[2610]: I0117 12:22:04.120811 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-bin-dir\") pod \"calico-node-hwr7p\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " pod="calico-system/calico-node-hwr7p" Jan 17 12:22:04.163508 systemd[1]: Started cri-containerd-c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165.scope - libcontainer container c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165. 
Jan 17 12:22:04.196048 kubelet[2610]: I0117 12:22:04.195153 2610 topology_manager.go:215] "Topology Admit Handler" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc" podNamespace="calico-system" podName="csi-node-driver-gjb8c" Jan 17 12:22:04.199271 kubelet[2610]: E0117 12:22:04.197470 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjb8c" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc" Jan 17 12:22:04.229807 kubelet[2610]: E0117 12:22:04.229570 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.229807 kubelet[2610]: W0117 12:22:04.229611 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.229807 kubelet[2610]: E0117 12:22:04.229647 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.231293 kubelet[2610]: E0117 12:22:04.230705 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.231293 kubelet[2610]: W0117 12:22:04.230736 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.231293 kubelet[2610]: E0117 12:22:04.230759 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.232383 kubelet[2610]: E0117 12:22:04.232286 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.232383 kubelet[2610]: W0117 12:22:04.232309 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.232383 kubelet[2610]: E0117 12:22:04.232331 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.236349 kubelet[2610]: E0117 12:22:04.234213 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.236349 kubelet[2610]: W0117 12:22:04.234239 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.236349 kubelet[2610]: E0117 12:22:04.234316 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:04.236349 kubelet[2610]: E0117 12:22:04.234646 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.236349 kubelet[2610]: W0117 12:22:04.234659 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.236349 kubelet[2610]: E0117 12:22:04.235933 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.240270 kubelet[2610]: E0117 12:22:04.237849 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.240270 kubelet[2610]: W0117 12:22:04.237870 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.240270 kubelet[2610]: E0117 12:22:04.237903 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.244623 kubelet[2610]: E0117 12:22:04.244588 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.244623 kubelet[2610]: W0117 12:22:04.244621 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.244842 kubelet[2610]: E0117 12:22:04.244650 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.245695 kubelet[2610]: E0117 12:22:04.245091 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.245695 kubelet[2610]: W0117 12:22:04.245116 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.245695 kubelet[2610]: E0117 12:22:04.245138 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.247743 kubelet[2610]: E0117 12:22:04.247630 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.247743 kubelet[2610]: W0117 12:22:04.247656 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.247743 kubelet[2610]: E0117 12:22:04.247679 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:04.249962 kubelet[2610]: E0117 12:22:04.248015 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.249962 kubelet[2610]: W0117 12:22:04.248036 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.249962 kubelet[2610]: E0117 12:22:04.248057 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.249962 kubelet[2610]: E0117 12:22:04.249470 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.249962 kubelet[2610]: W0117 12:22:04.249487 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.249962 kubelet[2610]: E0117 12:22:04.249849 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.254499 kubelet[2610]: E0117 12:22:04.252537 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.254499 kubelet[2610]: W0117 12:22:04.252554 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.254499 kubelet[2610]: E0117 12:22:04.252595 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.254499 kubelet[2610]: E0117 12:22:04.252899 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.254499 kubelet[2610]: W0117 12:22:04.252911 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.254499 kubelet[2610]: E0117 12:22:04.253001 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.254499 kubelet[2610]: E0117 12:22:04.253236 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.254499 kubelet[2610]: W0117 12:22:04.253261 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.254499 kubelet[2610]: E0117 12:22:04.253352 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:04.254499 kubelet[2610]: E0117 12:22:04.253650 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.255001 kubelet[2610]: W0117 12:22:04.253662 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.255001 kubelet[2610]: E0117 12:22:04.253766 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.255001 kubelet[2610]: E0117 12:22:04.254108 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.255001 kubelet[2610]: W0117 12:22:04.254150 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.255001 kubelet[2610]: E0117 12:22:04.254233 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.255001 kubelet[2610]: E0117 12:22:04.254607 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.255001 kubelet[2610]: W0117 12:22:04.254634 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.255001 kubelet[2610]: E0117 12:22:04.254652 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.256461 kubelet[2610]: E0117 12:22:04.256383 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.256461 kubelet[2610]: W0117 12:22:04.256404 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.256461 kubelet[2610]: E0117 12:22:04.256422 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:04.262272 kubelet[2610]: E0117 12:22:04.262017 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:04.262272 kubelet[2610]: W0117 12:22:04.262039 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:04.262272 kubelet[2610]: E0117 12:22:04.262080 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 17 12:22:04.288554 kubelet[2610]: E0117 12:22:04.288335 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:22:04.288554 kubelet[2610]: W0117 12:22:04.288362 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:22:04.288554 kubelet[2610]: E0117 12:22:04.288392 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:22:04.301727 kubelet[2610]: [last 3 messages repeated 19 more times, 12:22:04.288 to 12:22:04.301; duplicates omitted]
Jan 17 12:22:04.323604 kubelet[2610]: [FlexVolume nodeagent~uds probe-failure messages repeat from 12:22:04.323 to 12:22:04.351, interleaved with the entries below; duplicates omitted]
Jan 17 12:22:04.324458 kubelet[2610]: I0117 12:22:04.324061 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68c040bb-a18d-4fed-9ea3-2d0c63ef70bc-kubelet-dir\") pod \"csi-node-driver-gjb8c\" (UID: \"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc\") " pod="calico-system/csi-node-driver-gjb8c"
Jan 17 12:22:04.330491 containerd[1468]: time="2025-01-17T12:22:04.329430630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hwr7p,Uid:3e894b53-0a8f-4242-acbe-0970b488515e,Namespace:calico-system,Attempt:0,}"
Jan 17 12:22:04.330607 kubelet[2610]: I0117 12:22:04.330583 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68c040bb-a18d-4fed-9ea3-2d0c63ef70bc-socket-dir\") pod \"csi-node-driver-gjb8c\" (UID: \"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc\") " pod="calico-system/csi-node-driver-gjb8c"
Jan 17 12:22:04.331723 kubelet[2610]: I0117 12:22:04.331655 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68c040bb-a18d-4fed-9ea3-2d0c63ef70bc-registration-dir\") pod \"csi-node-driver-gjb8c\" (UID: \"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc\") " pod="calico-system/csi-node-driver-gjb8c"
Jan 17 12:22:04.333149 kubelet[2610]: I0117 12:22:04.333123 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rskk\" (UniqueName: \"kubernetes.io/projected/68c040bb-a18d-4fed-9ea3-2d0c63ef70bc-kube-api-access-9rskk\") pod \"csi-node-driver-gjb8c\" (UID: \"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc\") " pod="calico-system/csi-node-driver-gjb8c"
Jan 17 12:22:04.342066 kubelet[2610]: I0117 12:22:04.341679 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/68c040bb-a18d-4fed-9ea3-2d0c63ef70bc-varrun\") pod \"csi-node-driver-gjb8c\" (UID: \"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc\") " pod="calico-system/csi-node-driver-gjb8c"
Jan 17 12:22:04.363997 containerd[1468]: time="2025-01-17T12:22:04.363938637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588cd587fd-rx6m6,Uid:04762b83-9859-4ba7-8017-1bd889867ddf,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\""
Jan 17 12:22:04.367275 containerd[1468]: time="2025-01-17T12:22:04.366386147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 17 12:22:04.404408 containerd[1468]: time="2025-01-17T12:22:04.404305917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:22:04.404564 containerd[1468]: time="2025-01-17T12:22:04.404447235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:22:04.404564 containerd[1468]: time="2025-01-17T12:22:04.404533755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:22:04.405497 containerd[1468]: time="2025-01-17T12:22:04.404857455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:22:04.440519 systemd[1]: Started cri-containerd-d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9.scope - libcontainer container d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9.
Jan 17 12:22:04.475374 kubelet[2610]: [FlexVolume nodeagent~uds probe-failure messages repeated from 12:22:04.443 to 12:22:04.475; duplicates omitted]
Jan 17 12:22:04.518348 containerd[1468]: time="2025-01-17T12:22:04.518170044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hwr7p,Uid:3e894b53-0a8f-4242-acbe-0970b488515e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\""
Jan 17 12:22:05.506290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144375272.mount: Deactivated successfully.
Jan 17 12:22:06.330003 containerd[1468]: time="2025-01-17T12:22:06.329925896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:06.332209 containerd[1468]: time="2025-01-17T12:22:06.332134588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 17 12:22:06.334588 containerd[1468]: time="2025-01-17T12:22:06.334199812Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:06.338522 containerd[1468]: time="2025-01-17T12:22:06.338475130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:06.340369 containerd[1468]: time="2025-01-17T12:22:06.340317785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.97388111s"
Jan 17 12:22:06.340687 containerd[1468]: time="2025-01-17T12:22:06.340560512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 17 12:22:06.342810 containerd[1468]: time="2025-01-17T12:22:06.342549922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 17 12:22:06.364459 kubelet[2610]: E0117 12:22:06.363586 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjb8c" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc"
Jan 17 12:22:06.367708 containerd[1468]: time="2025-01-17T12:22:06.367319036Z" level=info msg="CreateContainer within sandbox \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 12:22:06.389684 containerd[1468]: time="2025-01-17T12:22:06.389566075Z" level=info msg="CreateContainer within sandbox \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\""
Jan 17 12:22:06.391927 containerd[1468]: time="2025-01-17T12:22:06.390516965Z" level=info msg="StartContainer for \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\""
Jan 17 12:22:06.448501 systemd[1]: Started cri-containerd-672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c.scope - libcontainer container 672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c.
Jan 17 12:22:06.515274 containerd[1468]: time="2025-01-17T12:22:06.514478925Z" level=info msg="StartContainer for \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\" returns successfully"
Jan 17 12:22:07.415525 containerd[1468]: time="2025-01-17T12:22:07.415448411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:07.416976 containerd[1468]: time="2025-01-17T12:22:07.416901934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 17 12:22:07.418836 containerd[1468]: time="2025-01-17T12:22:07.418760309Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:07.422557 containerd[1468]: time="2025-01-17T12:22:07.422504319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:22:07.423714 containerd[1468]: time="2025-01-17T12:22:07.423423688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.080828264s"
Jan 17 12:22:07.423714 containerd[1468]: time="2025-01-17T12:22:07.423480688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 17 12:22:07.428848 containerd[1468]: time="2025-01-17T12:22:07.428432321Z" level=info msg="CreateContainer within sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:22:07.457028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316709653.mount: Deactivated successfully.
Jan 17 12:22:07.460747 containerd[1468]: time="2025-01-17T12:22:07.457701361Z" level=info msg="CreateContainer within sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e\""
Jan 17 12:22:07.460747 containerd[1468]: time="2025-01-17T12:22:07.459632074Z" level=info msg="StartContainer for \"b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e\""
Jan 17 12:22:07.523296 systemd[1]: Started cri-containerd-b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e.scope - libcontainer container b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e.
Jan 17 12:22:07.525323 kubelet[2610]: I0117 12:22:07.524990 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-588cd587fd-rx6m6" podStartSLOduration=2.54850516 podStartE2EDuration="4.524966693s" podCreationTimestamp="2025-01-17 12:22:03 +0000 UTC" firstStartedPulling="2025-01-17 12:22:04.365738362 +0000 UTC m=+21.193836896" lastFinishedPulling="2025-01-17 12:22:06.342199884 +0000 UTC m=+23.170298429" observedRunningTime="2025-01-17 12:22:07.522346187 +0000 UTC m=+24.350444743" watchObservedRunningTime="2025-01-17 12:22:07.524966693 +0000 UTC m=+24.353065249"
Jan 17 12:22:07.531319 kubelet[2610]: [FlexVolume nodeagent~uds probe-failure messages repeat from 12:22:07.530 to 12:22:07.584; duplicates omitted]
Jan 17 12:22:07.584729 containerd[1468]: time="2025-01-17T12:22:07.583408647Z" level=info msg="StartContainer for \"b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e\" returns successfully"
Jan 17 12:22:07.586274 kubelet[2610]: [probe-failure messages continue through 12:22:07.587; duplicates omitted]
Jan 17 12:22:07.587558 kubelet[2610]: E0117 12:22:07.587501 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:07.589422 kubelet[2610]: E0117 12:22:07.588832 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:07.589422 kubelet[2610]: W0117 12:22:07.588852 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:07.589422 kubelet[2610]: E0117 12:22:07.588871 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:07.590106 kubelet[2610]: E0117 12:22:07.589749 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:07.590106 kubelet[2610]: W0117 12:22:07.589765 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:07.590106 kubelet[2610]: E0117 12:22:07.590074 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:07.590106 kubelet[2610]: W0117 12:22:07.590088 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:07.590106 kubelet[2610]: E0117 12:22:07.590107 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:07.591646 kubelet[2610]: E0117 12:22:07.590788 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:07.591646 kubelet[2610]: W0117 12:22:07.590803 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:07.591646 kubelet[2610]: E0117 12:22:07.590818 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:07.591646 kubelet[2610]: E0117 12:22:07.590864 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:07.591646 kubelet[2610]: E0117 12:22:07.591176 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:07.591646 kubelet[2610]: W0117 12:22:07.591190 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:07.591646 kubelet[2610]: E0117 12:22:07.591208 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:22:07.592757 kubelet[2610]: E0117 12:22:07.592461 2610 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:22:07.592757 kubelet[2610]: W0117 12:22:07.592480 2610 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:22:07.592757 kubelet[2610]: E0117 12:22:07.592497 2610 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:22:07.621513 systemd[1]: cri-containerd-b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e.scope: Deactivated successfully. Jan 17 12:22:08.255841 containerd[1468]: time="2025-01-17T12:22:08.255747051Z" level=info msg="shim disconnected" id=b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e namespace=k8s.io Jan 17 12:22:08.255841 containerd[1468]: time="2025-01-17T12:22:08.255838012Z" level=warning msg="cleaning up after shim disconnected" id=b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e namespace=k8s.io Jan 17 12:22:08.255841 containerd[1468]: time="2025-01-17T12:22:08.255853173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:08.363972 kubelet[2610]: E0117 12:22:08.363897 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjb8c" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc" Jan 17 12:22:08.449472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e-rootfs.mount: Deactivated successfully. 
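The repeating "unexpected end of JSON input" block above is the kubelet's FlexVolume prober: for every vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes `<driver> init` and parses stdout as JSON, and because the nodeagent~uds/uds executable is missing here, the output is empty and unmarshalling fails on every probe cycle. A FlexVolume driver is just an executable that answers such calls with a JSON status object on stdout; the sketch below is a hypothetical minimal stub (not the real nodeagent driver) that would satisfy the `init` probe and silence this storm.

```go
// flexvolume-stub.go: hedged sketch of the FlexVolume driver-call contract.
// Installed as .../volume/exec/nodeagent~uds/uds it would answer the kubelet's
// "init" probe with valid JSON instead of the empty output logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet's driver-call.go unmarshals.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, code int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet this driver has no attach/detach phase.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	}
	// Every verb the stub doesn't implement gets the conventional "Not supported" answer.
	reply(driverStatus{Status: "Not supported"}, 1)
}
```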
Jan 17 12:22:08.501730 kubelet[2610]: I0117 12:22:08.501684 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:08.503961 containerd[1468]: time="2025-01-17T12:22:08.503642944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:22:10.364860 kubelet[2610]: E0117 12:22:10.364343 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjb8c" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc" Jan 17 12:22:12.326804 containerd[1468]: time="2025-01-17T12:22:12.326738536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:12.329111 containerd[1468]: time="2025-01-17T12:22:12.329020775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:22:12.330482 containerd[1468]: time="2025-01-17T12:22:12.330297470Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:12.334159 containerd[1468]: time="2025-01-17T12:22:12.334118707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:12.338151 containerd[1468]: time="2025-01-17T12:22:12.337174400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.833479634s" Jan 17 12:22:12.338151 containerd[1468]: time="2025-01-17T12:22:12.337226321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:22:12.349285 containerd[1468]: time="2025-01-17T12:22:12.348926465Z" level=info msg="CreateContainer within sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:22:12.367277 kubelet[2610]: E0117 12:22:12.364318 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjb8c" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc" Jan 17 12:22:12.376172 containerd[1468]: time="2025-01-17T12:22:12.376067832Z" level=info msg="CreateContainer within sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2\"" Jan 17 12:22:12.377192 containerd[1468]: time="2025-01-17T12:22:12.377050755Z" level=info msg="StartContainer for \"83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2\"" Jan 17 12:22:12.426648 systemd[1]: Started 
cri-containerd-83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2.scope - libcontainer container 83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2. Jan 17 12:22:12.497855 containerd[1468]: time="2025-01-17T12:22:12.497703725Z" level=info msg="StartContainer for \"83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2\" returns successfully" Jan 17 12:22:13.488000 containerd[1468]: time="2025-01-17T12:22:13.487924925Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:22:13.490692 systemd[1]: cri-containerd-83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2.scope: Deactivated successfully. Jan 17 12:22:13.531922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2-rootfs.mount: Deactivated successfully. Jan 17 12:22:13.575318 kubelet[2610]: I0117 12:22:13.574401 2610 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:22:13.629280 kubelet[2610]: I0117 12:22:13.628516 2610 topology_manager.go:215] "Topology Admit Handler" podUID="df01509d-c2e3-4521-bb4f-4b625ab957e3" podNamespace="calico-apiserver" podName="calico-apiserver-7b5f56598d-kfl2c" Jan 17 12:22:13.633280 kubelet[2610]: I0117 12:22:13.632080 2610 topology_manager.go:215] "Topology Admit Handler" podUID="d4d7de4f-610c-42bb-9ed3-95154eded5ac" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xn5bq" Jan 17 12:22:13.662885 systemd[1]: Created slice kubepods-besteffort-poddf01509d_c2e3_4521_bb4f_4b625ab957e3.slice - libcontainer container kubepods-besteffort-poddf01509d_c2e3_4521_bb4f_4b625ab957e3.slice. Jan 17 12:22:13.664542 kubelet[2610]: I0117 12:22:13.663191 2610 topology_manager.go:215] "Topology Admit Handler" podUID="c715a196-a54b-4348-9ab5-065abb9617bb" podNamespace="calico-system" podName="calico-kube-controllers-7d68f7877f-n2vhz" Jan 17 12:22:13.669138 kubelet[2610]: I0117 12:22:13.669078 2610 topology_manager.go:215] "Topology Admit Handler" podUID="5d7aeb14-5869-48a1-96a7-a215252689a5" podNamespace="calico-apiserver" podName="calico-apiserver-7b5f56598d-sv9sp" Jan 17 12:22:13.671461 kubelet[2610]: I0117 12:22:13.671407 2610 topology_manager.go:215] "Topology Admit Handler" podUID="9a11ecfa-a757-43fc-8ab9-8e4424da26ba" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9xkbk" Jan 17 12:22:13.689786 systemd[1]: Created slice kubepods-burstable-podd4d7de4f_610c_42bb_9ed3_95154eded5ac.slice - libcontainer container kubepods-burstable-podd4d7de4f_610c_42bb_9ed3_95154eded5ac.slice. Jan 17 12:22:13.702358 systemd[1]: Created slice kubepods-burstable-pod9a11ecfa_a757_43fc_8ab9_8e4424da26ba.slice - libcontainer container kubepods-burstable-pod9a11ecfa_a757_43fc_8ab9_8e4424da26ba.slice. 
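The install-cni container that just ran is Calico's initialization step: it drops the CNI plugin binaries into place and renders a network config into /etc/cni/net.d, and the reload error above fires because, at the moment the calico-kubeconfig write event arrived, no *.conflist existed yet for containerd to load. As a hedged sketch only (the real file is rendered from Calico's ConfigMap and its field values vary), the conflist it eventually writes has roughly this shape, embedded as a string so its validity can be checked:

```go
// conflist-shape.go: hedged sketch of the 10-calico.conflist that install-cni
// renders into /etc/cni/net.d (values here are illustrative, not taken from
// this system). containerd's CRI plugin reports "cni plugin not initialized"
// until a parseable file like this exists.
package main

import (
	"encoding/json"
	"fmt"
)

const conflist = `{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": {"type": "calico-ipam"},
      "policy": {"type": "k8s"},
      "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"}
    },
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	var cfg struct {
		Name    string            `json:"name"`
		Plugins []json.RawMessage `json:"plugins"`
	}
	if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
		panic(err) // a malformed conflist is exactly what keeps NetworkReady=false
	}
	fmt.Printf("network %q with %d plugins\n", cfg.Name, len(cfg.Plugins))
}
```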
Jan 17 12:22:13.734815 kubelet[2610]: I0117 12:22:13.723715 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/df01509d-c2e3-4521-bb4f-4b625ab957e3-calico-apiserver-certs\") pod \"calico-apiserver-7b5f56598d-kfl2c\" (UID: \"df01509d-c2e3-4521-bb4f-4b625ab957e3\") " pod="calico-apiserver/calico-apiserver-7b5f56598d-kfl2c" Jan 17 12:22:13.734815 kubelet[2610]: I0117 12:22:13.723763 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k78b\" (UniqueName: \"kubernetes.io/projected/df01509d-c2e3-4521-bb4f-4b625ab957e3-kube-api-access-9k78b\") pod \"calico-apiserver-7b5f56598d-kfl2c\" (UID: \"df01509d-c2e3-4521-bb4f-4b625ab957e3\") " pod="calico-apiserver/calico-apiserver-7b5f56598d-kfl2c" Jan 17 12:22:13.734815 kubelet[2610]: I0117 12:22:13.723798 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bz2h\" (UniqueName: \"kubernetes.io/projected/d4d7de4f-610c-42bb-9ed3-95154eded5ac-kube-api-access-8bz2h\") pod \"coredns-7db6d8ff4d-xn5bq\" (UID: \"d4d7de4f-610c-42bb-9ed3-95154eded5ac\") " pod="kube-system/coredns-7db6d8ff4d-xn5bq" Jan 17 12:22:13.734815 kubelet[2610]: I0117 12:22:13.723832 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkstr\" (UniqueName: \"kubernetes.io/projected/9a11ecfa-a757-43fc-8ab9-8e4424da26ba-kube-api-access-qkstr\") pod \"coredns-7db6d8ff4d-9xkbk\" (UID: \"9a11ecfa-a757-43fc-8ab9-8e4424da26ba\") " pod="kube-system/coredns-7db6d8ff4d-9xkbk" Jan 17 12:22:13.734815 kubelet[2610]: I0117 12:22:13.723867 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z78s4\" (UniqueName: \"kubernetes.io/projected/c715a196-a54b-4348-9ab5-065abb9617bb-kube-api-access-z78s4\") pod \"calico-kube-controllers-7d68f7877f-n2vhz\" (UID: \"c715a196-a54b-4348-9ab5-065abb9617bb\") " pod="calico-system/calico-kube-controllers-7d68f7877f-n2vhz" Jan 17 12:22:13.714563 systemd[1]: Created slice kubepods-besteffort-podc715a196_a54b_4348_9ab5_065abb9617bb.slice - libcontainer container kubepods-besteffort-podc715a196_a54b_4348_9ab5_065abb9617bb.slice. 
Jan 17 12:22:13.737747 kubelet[2610]: I0117 12:22:13.723897 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d7aeb14-5869-48a1-96a7-a215252689a5-calico-apiserver-certs\") pod \"calico-apiserver-7b5f56598d-sv9sp\" (UID: \"5d7aeb14-5869-48a1-96a7-a215252689a5\") " pod="calico-apiserver/calico-apiserver-7b5f56598d-sv9sp" Jan 17 12:22:13.737747 kubelet[2610]: I0117 12:22:13.723926 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f274r\" (UniqueName: \"kubernetes.io/projected/5d7aeb14-5869-48a1-96a7-a215252689a5-kube-api-access-f274r\") pod \"calico-apiserver-7b5f56598d-sv9sp\" (UID: \"5d7aeb14-5869-48a1-96a7-a215252689a5\") " pod="calico-apiserver/calico-apiserver-7b5f56598d-sv9sp" Jan 17 12:22:13.737747 kubelet[2610]: I0117 12:22:13.723959 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a11ecfa-a757-43fc-8ab9-8e4424da26ba-config-volume\") pod \"coredns-7db6d8ff4d-9xkbk\" (UID: \"9a11ecfa-a757-43fc-8ab9-8e4424da26ba\") " pod="kube-system/coredns-7db6d8ff4d-9xkbk" Jan 17 12:22:13.737747 kubelet[2610]: I0117 12:22:13.723987 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4d7de4f-610c-42bb-9ed3-95154eded5ac-config-volume\") pod \"coredns-7db6d8ff4d-xn5bq\" (UID: \"d4d7de4f-610c-42bb-9ed3-95154eded5ac\") " pod="kube-system/coredns-7db6d8ff4d-xn5bq" Jan 17 12:22:13.737747 kubelet[2610]: I0117 12:22:13.724021 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c715a196-a54b-4348-9ab5-065abb9617bb-tigera-ca-bundle\") pod \"calico-kube-controllers-7d68f7877f-n2vhz\" (UID: \"c715a196-a54b-4348-9ab5-065abb9617bb\") " pod="calico-system/calico-kube-controllers-7d68f7877f-n2vhz" Jan 17 12:22:13.725041 systemd[1]: Created slice kubepods-besteffort-pod5d7aeb14_5869_48a1_96a7_a215252689a5.slice - libcontainer container kubepods-besteffort-pod5d7aeb14_5869_48a1_96a7_a215252689a5.slice. 
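Each "Created slice" record above pairs one of the admitted pod UIDs with a systemd slice. With the systemd cgroup driver, the kubelet nests pods as kubepods-<qos>-pod<uid>.slice and replaces the dashes inside the UID with underscores, since "-" is systemd's hierarchy separator in unit names. A small sketch of that mapping, using a UID from the log:

```go
// podslice.go: sketch of the kubelet systemd cgroup driver's slice naming,
// reproducing the names visible in the "Created slice" records above.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the per-pod slice name: QoS class plus the pod UID with
// "-" escaped to "_" so systemd does not treat it as further slice nesting.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of calico-apiserver-7b5f56598d-kfl2c from the Admit Handler record.
	fmt.Println(podSlice("besteffort", "df01509d-c2e3-4521-bb4f-4b625ab957e3"))
	// Output: kubepods-besteffort-poddf01509d_c2e3_4521_bb4f_4b625ab957e3.slice
}
```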
Jan 17 12:22:14.037490 containerd[1468]: time="2025-01-17T12:22:14.037132541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-kfl2c,Uid:df01509d-c2e3-4521-bb4f-4b625ab957e3,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:22:14.043261 containerd[1468]: time="2025-01-17T12:22:14.042676140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xn5bq,Uid:d4d7de4f-610c-42bb-9ed3-95154eded5ac,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:14.043261 containerd[1468]: time="2025-01-17T12:22:14.042805996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-sv9sp,Uid:5d7aeb14-5869-48a1-96a7-a215252689a5,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:22:14.043261 containerd[1468]: time="2025-01-17T12:22:14.043048717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xkbk,Uid:9a11ecfa-a757-43fc-8ab9-8e4424da26ba,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:14.043882 containerd[1468]: time="2025-01-17T12:22:14.043794500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d68f7877f-n2vhz,Uid:c715a196-a54b-4348-9ab5-065abb9617bb,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:14.296588 containerd[1468]: time="2025-01-17T12:22:14.296380400Z" level=info msg="shim disconnected" id=83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2 namespace=k8s.io Jan 17 12:22:14.296588 containerd[1468]: time="2025-01-17T12:22:14.296467789Z" level=warning msg="cleaning up after shim disconnected" id=83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2 namespace=k8s.io Jan 17 12:22:14.296588 containerd[1468]: time="2025-01-17T12:22:14.296485255Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:14.378415 systemd[1]: Created slice kubepods-besteffort-pod68c040bb_a18d_4fed_9ea3_2d0c63ef70bc.slice - libcontainer container kubepods-besteffort-pod68c040bb_a18d_4fed_9ea3_2d0c63ef70bc.slice. Jan 17 12:22:14.388327 containerd[1468]: time="2025-01-17T12:22:14.387600617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjb8c,Uid:68c040bb-a18d-4fed-9ea3-2d0c63ef70bc,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:14.562180 containerd[1468]: time="2025-01-17T12:22:14.561941101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:22:14.716299 containerd[1468]: time="2025-01-17T12:22:14.715840047Z" level=error msg="Failed to destroy network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.725305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87-shm.mount: Deactivated successfully. 
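From here on, every RunPodSandbox and StopPodSandbox attempt fails with the same root cause: the Calico CNI plugin resolves which node it is running on by reading /var/lib/calico/nodename, and that file only appears once the calico/node container (still being pulled above as ghcr.io/flatcar/calico/node:v3.29.1) is up with the host path mounted. A hedged sketch of that lookup, assuming only the file location named in the error text (the real plugin's code path differs in detail):

```go
// nodename.go: sketch of the node-name lookup behind the repeated
// "stat /var/lib/calico/nodename" failures below; the dependency is the file,
// not any network state.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// detectNodename fails exactly like the sandbox errors in the log until
// calico/node has started and written the file.
func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

The kubelet and containerd keep retrying sandbox setup and teardown, which is why the identical stat error repeats once per pending pod in the records that follow.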
Jan 17 12:22:14.727033 containerd[1468]: time="2025-01-17T12:22:14.726881333Z" level=error msg="encountered an error cleaning up failed sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.727033 containerd[1468]: time="2025-01-17T12:22:14.726976726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-kfl2c,Uid:df01509d-c2e3-4521-bb4f-4b625ab957e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.728057 kubelet[2610]: E0117 12:22:14.727821 2610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.728057 kubelet[2610]: E0117 12:22:14.727924 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f56598d-kfl2c" Jan 17 12:22:14.728057 kubelet[2610]: E0117 12:22:14.727962 2610 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f56598d-kfl2c" Jan 17 12:22:14.731017 kubelet[2610]: E0117 12:22:14.728023 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5f56598d-kfl2c_calico-apiserver(df01509d-c2e3-4521-bb4f-4b625ab957e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5f56598d-kfl2c_calico-apiserver(df01509d-c2e3-4521-bb4f-4b625ab957e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5f56598d-kfl2c" podUID="df01509d-c2e3-4521-bb4f-4b625ab957e3" Jan 17 12:22:14.731763 containerd[1468]: time="2025-01-17T12:22:14.731598007Z" level=error msg="Failed to destroy network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.738192 containerd[1468]: time="2025-01-17T12:22:14.738117369Z" level=error msg="encountered an error cleaning up failed sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.739177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1-shm.mount: Deactivated successfully. Jan 17 12:22:14.739700 containerd[1468]: time="2025-01-17T12:22:14.739309351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-sv9sp,Uid:5d7aeb14-5869-48a1-96a7-a215252689a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.739827 kubelet[2610]: E0117 12:22:14.739701 2610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.739827 kubelet[2610]: E0117 12:22:14.739777 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f56598d-sv9sp" Jan 17 12:22:14.739827 kubelet[2610]: E0117 12:22:14.739809 2610 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b5f56598d-sv9sp" Jan 17 12:22:14.740015 kubelet[2610]: E0117 12:22:14.739879 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b5f56598d-sv9sp_calico-apiserver(5d7aeb14-5869-48a1-96a7-a215252689a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b5f56598d-sv9sp_calico-apiserver(5d7aeb14-5869-48a1-96a7-a215252689a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7b5f56598d-sv9sp" podUID="5d7aeb14-5869-48a1-96a7-a215252689a5" Jan 17 12:22:14.755113 containerd[1468]: time="2025-01-17T12:22:14.753812700Z" level=error msg="Failed to destroy network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.755113 containerd[1468]: time="2025-01-17T12:22:14.754401926Z" level=error msg="encountered an error cleaning up failed sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.755113 containerd[1468]: time="2025-01-17T12:22:14.754492930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xn5bq,Uid:d4d7de4f-610c-42bb-9ed3-95154eded5ac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.757667 kubelet[2610]: E0117 12:22:14.757515 2610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.757667 kubelet[2610]: E0117 12:22:14.757594 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xn5bq" Jan 17 12:22:14.757667 kubelet[2610]: E0117 12:22:14.757627 2610 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xn5bq" Jan 17 12:22:14.757910 kubelet[2610]: E0117 12:22:14.757699 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xn5bq_kube-system(d4d7de4f-610c-42bb-9ed3-95154eded5ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xn5bq_kube-system(d4d7de4f-610c-42bb-9ed3-95154eded5ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xn5bq" podUID="d4d7de4f-610c-42bb-9ed3-95154eded5ac" Jan 17 12:22:14.761885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044-shm.mount: Deactivated successfully. Jan 17 12:22:14.774185 containerd[1468]: time="2025-01-17T12:22:14.774104068Z" level=error msg="Failed to destroy network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.775440 containerd[1468]: time="2025-01-17T12:22:14.775384406Z" level=error msg="Failed to destroy network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.779940 containerd[1468]: time="2025-01-17T12:22:14.779630911Z" level=error msg="encountered an error cleaning up failed sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.780092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f-shm.mount: Deactivated successfully. Jan 17 12:22:14.780336 containerd[1468]: time="2025-01-17T12:22:14.780295611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d68f7877f-n2vhz,Uid:c715a196-a54b-4348-9ab5-065abb9617bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.783008 kubelet[2610]: E0117 12:22:14.780804 2610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.783008 kubelet[2610]: E0117 12:22:14.780884 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d68f7877f-n2vhz" Jan 17 12:22:14.783008 kubelet[2610]: E0117 12:22:14.780928 2610 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d68f7877f-n2vhz" Jan 17 12:22:14.783304 kubelet[2610]: E0117 12:22:14.780997 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d68f7877f-n2vhz_calico-system(c715a196-a54b-4348-9ab5-065abb9617bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d68f7877f-n2vhz_calico-system(c715a196-a54b-4348-9ab5-065abb9617bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d68f7877f-n2vhz" podUID="c715a196-a54b-4348-9ab5-065abb9617bb" Jan 17 12:22:14.783760 containerd[1468]: time="2025-01-17T12:22:14.783712587Z" level=error msg="encountered an error cleaning up failed sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.783944 containerd[1468]: time="2025-01-17T12:22:14.783909421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xkbk,Uid:9a11ecfa-a757-43fc-8ab9-8e4424da26ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.785348 kubelet[2610]: E0117 12:22:14.784695 2610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.785348 kubelet[2610]: E0117 12:22:14.784773 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9xkbk" Jan 17 12:22:14.785348 kubelet[2610]: E0117 12:22:14.784804 2610 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-9xkbk" Jan 17 12:22:14.785603 kubelet[2610]: E0117 12:22:14.784861 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9xkbk_kube-system(9a11ecfa-a757-43fc-8ab9-8e4424da26ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9xkbk_kube-system(9a11ecfa-a757-43fc-8ab9-8e4424da26ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9xkbk" podUID="9a11ecfa-a757-43fc-8ab9-8e4424da26ba" Jan 17 12:22:14.791268 containerd[1468]: time="2025-01-17T12:22:14.791180405Z" level=error msg="Failed to destroy network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.791669 containerd[1468]: time="2025-01-17T12:22:14.791619302Z" level=error msg="encountered an error cleaning up failed sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.791813 containerd[1468]: time="2025-01-17T12:22:14.791708205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjb8c,Uid:68c040bb-a18d-4fed-9ea3-2d0c63ef70bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.792033 kubelet[2610]: E0117 12:22:14.791988 2610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:14.792128 kubelet[2610]: E0117 12:22:14.792063 2610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gjb8c" Jan 17 12:22:14.792128 kubelet[2610]: E0117 12:22:14.792094 2610 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gjb8c" Jan 17 12:22:14.792278 kubelet[2610]: E0117 12:22:14.792157 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gjb8c_calico-system(68c040bb-a18d-4fed-9ea3-2d0c63ef70bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gjb8c_calico-system(68c040bb-a18d-4fed-9ea3-2d0c63ef70bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gjb8c" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc" Jan 17 12:22:15.535747 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5-shm.mount: Deactivated successfully. Jan 17 12:22:15.536505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf-shm.mount: Deactivated successfully. Jan 17 12:22:15.550066 kubelet[2610]: I0117 12:22:15.550019 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:15.558503 kubelet[2610]: I0117 12:22:15.553441 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:15.560309 containerd[1468]: time="2025-01-17T12:22:15.554073540Z" level=info msg="StopPodSandbox for \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\"" Jan 17 12:22:15.560309 containerd[1468]: time="2025-01-17T12:22:15.554338764Z" level=info msg="Ensure that sandbox 94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1 in task-service has been cleanup successfully" Jan 17 12:22:15.567815 containerd[1468]: time="2025-01-17T12:22:15.567766975Z" level=info msg="StopPodSandbox for \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\"" Jan 17 12:22:15.568372 containerd[1468]: time="2025-01-17T12:22:15.568118939Z" level=info msg="Ensure that sandbox 133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044 in task-service has been cleanup successfully" Jan 17 12:22:15.573084 kubelet[2610]: I0117 12:22:15.573055 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:15.577272 containerd[1468]: time="2025-01-17T12:22:15.576693039Z" level=info msg="StopPodSandbox for \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\"" Jan 17 12:22:15.577454 kubelet[2610]: I0117 12:22:15.576240 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:15.579504 containerd[1468]: time="2025-01-17T12:22:15.579455201Z" level=info msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\"" Jan 17 12:22:15.579956 containerd[1468]: time="2025-01-17T12:22:15.579924182Z" level=info msg="Ensure that sandbox d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf in task-service has been cleanup successfully" Jan 17 
12:22:15.581962 kubelet[2610]: I0117 12:22:15.581851 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:15.584440 containerd[1468]: time="2025-01-17T12:22:15.583443244Z" level=info msg="Ensure that sandbox fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5 in task-service has been cleanup successfully" Jan 17 12:22:15.587128 kubelet[2610]: I0117 12:22:15.586575 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:15.589868 containerd[1468]: time="2025-01-17T12:22:15.583625224Z" level=info msg="StopPodSandbox for \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\"" Jan 17 12:22:15.590346 containerd[1468]: time="2025-01-17T12:22:15.590297886Z" level=info msg="Ensure that sandbox 913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87 in task-service has been cleanup successfully" Jan 17 12:22:15.593615 containerd[1468]: time="2025-01-17T12:22:15.593337359Z" level=info msg="StopPodSandbox for \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\"" Jan 17 12:22:15.594773 containerd[1468]: time="2025-01-17T12:22:15.594198349Z" level=info msg="Ensure that sandbox 6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f in task-service has been cleanup successfully" Jan 17 12:22:15.704564 containerd[1468]: time="2025-01-17T12:22:15.704480599Z" level=error msg="StopPodSandbox for \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\" failed" error="failed to destroy network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:15.705115 kubelet[2610]: E0117 12:22:15.704961 2610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:15.705465 kubelet[2610]: E0117 12:22:15.705286 2610 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1"} Jan 17 12:22:15.705655 kubelet[2610]: E0117 12:22:15.705580 2610 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d7aeb14-5869-48a1-96a7-a215252689a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:15.705810 kubelet[2610]: E0117 12:22:15.705676 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d7aeb14-5869-48a1-96a7-a215252689a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5f56598d-sv9sp" podUID="5d7aeb14-5869-48a1-96a7-a215252689a5" Jan 17 12:22:15.739173 containerd[1468]: time="2025-01-17T12:22:15.739079484Z" level=error msg="StopPodSandbox for \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\" failed" error="failed to destroy network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:15.739509 kubelet[2610]: E0117 12:22:15.739457 2610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:15.740055 kubelet[2610]: E0117 12:22:15.739530 2610 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5"} Jan 17 12:22:15.740055 kubelet[2610]: E0117 12:22:15.739581 2610 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:15.740055 kubelet[2610]: E0117 12:22:15.739620 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gjb8c" podUID="68c040bb-a18d-4fed-9ea3-2d0c63ef70bc" Jan 17 12:22:15.746303 containerd[1468]: time="2025-01-17T12:22:15.746221450Z" level=error msg="StopPodSandbox for \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\" failed" error="failed to destroy network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:15.746571 kubelet[2610]: E0117 12:22:15.746525 2610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:15.746701 kubelet[2610]: E0117 12:22:15.746591 2610 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044"} Jan 17 12:22:15.746701 kubelet[2610]: E0117 12:22:15.746645 2610 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4d7de4f-610c-42bb-9ed3-95154eded5ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:15.746701 kubelet[2610]: E0117 12:22:15.746681 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4d7de4f-610c-42bb-9ed3-95154eded5ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xn5bq" podUID="d4d7de4f-610c-42bb-9ed3-95154eded5ac" Jan 17 12:22:15.771013 containerd[1468]: time="2025-01-17T12:22:15.770943544Z" level=error msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" failed" error="failed to destroy network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:15.771368 kubelet[2610]: E0117 12:22:15.771300 2610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:15.771532 kubelet[2610]: E0117 12:22:15.771395 2610 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf"} Jan 17 12:22:15.771532 kubelet[2610]: E0117 12:22:15.771473 2610 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c715a196-a54b-4348-9ab5-065abb9617bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jan 17 12:22:15.771716 kubelet[2610]: E0117 12:22:15.771531 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c715a196-a54b-4348-9ab5-065abb9617bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d68f7877f-n2vhz" podUID="c715a196-a54b-4348-9ab5-065abb9617bb" Jan 17 12:22:15.771855 containerd[1468]: time="2025-01-17T12:22:15.771805464Z" level=error msg="StopPodSandbox for \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\" failed" error="failed to destroy network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:15.772060 kubelet[2610]: E0117 12:22:15.772019 2610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:15.772151 kubelet[2610]: E0117 12:22:15.772074 2610 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87"} Jan 17 12:22:15.772151 kubelet[2610]: E0117 12:22:15.772130 2610 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df01509d-c2e3-4521-bb4f-4b625ab957e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:15.772615 kubelet[2610]: E0117 12:22:15.772163 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df01509d-c2e3-4521-bb4f-4b625ab957e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b5f56598d-kfl2c" podUID="df01509d-c2e3-4521-bb4f-4b625ab957e3" Jan 17 12:22:15.776320 containerd[1468]: time="2025-01-17T12:22:15.775896103Z" level=error msg="StopPodSandbox for \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\" failed" error="failed to destroy network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:22:15.776442 kubelet[2610]: E0117 12:22:15.776115 2610 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:15.776442 kubelet[2610]: E0117 12:22:15.776163 2610 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f"} Jan 17 12:22:15.776442 kubelet[2610]: E0117 12:22:15.776205 2610 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a11ecfa-a757-43fc-8ab9-8e4424da26ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:22:15.776442 kubelet[2610]: E0117 12:22:15.776240 2610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a11ecfa-a757-43fc-8ab9-8e4424da26ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9xkbk" podUID="9a11ecfa-a757-43fc-8ab9-8e4424da26ba" Jan 17 12:22:21.170717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2637619591.mount: Deactivated successfully. 
Jan 17 12:22:21.217471 containerd[1468]: time="2025-01-17T12:22:21.217382350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:21.219003 containerd[1468]: time="2025-01-17T12:22:21.218917319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:22:21.220525 containerd[1468]: time="2025-01-17T12:22:21.220478801Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:21.224003 containerd[1468]: time="2025-01-17T12:22:21.223963502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:21.226313 containerd[1468]: time="2025-01-17T12:22:21.225711694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.663709268s" Jan 17 12:22:21.226313 containerd[1468]: time="2025-01-17T12:22:21.225766275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:22:21.251733 containerd[1468]: time="2025-01-17T12:22:21.251408657Z" level=info msg="CreateContainer within sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:22:21.276979 containerd[1468]: time="2025-01-17T12:22:21.276908967Z" level=info msg="CreateContainer within sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\"" Jan 17 12:22:21.277755 containerd[1468]: time="2025-01-17T12:22:21.277714506Z" level=info msg="StartContainer for \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\"" Jan 17 12:22:21.318536 systemd[1]: Started cri-containerd-ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3.scope - libcontainer container ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3. Jan 17 12:22:21.366283 containerd[1468]: time="2025-01-17T12:22:21.365755644Z" level=info msg="StartContainer for \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\" returns successfully" Jan 17 12:22:21.473397 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:22:21.473580 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 17 12:22:21.639080 kubelet[2610]: I0117 12:22:21.638994 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hwr7p" podStartSLOduration=1.932727233 podStartE2EDuration="18.638970936s" podCreationTimestamp="2025-01-17 12:22:03 +0000 UTC" firstStartedPulling="2025-01-17 12:22:04.52093029 +0000 UTC m=+21.349028840" lastFinishedPulling="2025-01-17 12:22:21.227174004 +0000 UTC m=+38.055272543" observedRunningTime="2025-01-17 12:22:21.638234497 +0000 UTC m=+38.466333055" watchObservedRunningTime="2025-01-17 12:22:21.638970936 +0000 UTC m=+38.467069493" Jan 17 12:22:22.209489 kubelet[2610]: I0117 12:22:22.208854 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:23.373283 kernel: bpftool[3886]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:22:23.669379 systemd-networkd[1378]: vxlan.calico: Link UP Jan 17 12:22:23.670437 systemd-networkd[1378]: vxlan.calico: Gained carrier Jan 17 12:22:24.798527 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Jan 17 12:22:26.365456 containerd[1468]: time="2025-01-17T12:22:26.365234485Z" level=info msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\"" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.474 [INFO][3972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.475 [INFO][3972] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" iface="eth0" netns="/var/run/netns/cni-2dd670c9-a789-88d2-50bb-472c5639ba98" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.476 [INFO][3972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" iface="eth0" netns="/var/run/netns/cni-2dd670c9-a789-88d2-50bb-472c5639ba98" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.476 [INFO][3972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" iface="eth0" netns="/var/run/netns/cni-2dd670c9-a789-88d2-50bb-472c5639ba98" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.476 [INFO][3972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.476 [INFO][3972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.507 [INFO][3978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.507 [INFO][3978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.507 [INFO][3978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.514 [WARNING][3978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.514 [INFO][3978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.516 [INFO][3978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:26.521008 containerd[1468]: 2025-01-17 12:22:26.519 [INFO][3972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:26.525171 containerd[1468]: time="2025-01-17T12:22:26.522714247Z" level=info msg="TearDown network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" successfully" Jan 17 12:22:26.525171 containerd[1468]: time="2025-01-17T12:22:26.522767784Z" level=info msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" returns successfully" Jan 17 12:22:26.525171 containerd[1468]: time="2025-01-17T12:22:26.524666220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d68f7877f-n2vhz,Uid:c715a196-a54b-4348-9ab5-065abb9617bb,Namespace:calico-system,Attempt:1,}" Jan 17 12:22:26.526629 systemd[1]: run-netns-cni\x2d2dd670c9\x2da789\x2d88d2\x2d50bb\x2d472c5639ba98.mount: Deactivated successfully. 
Jan 17 12:22:26.690091 systemd-networkd[1378]: cali667f700302b: Link UP Jan 17 12:22:26.691627 systemd-networkd[1378]: cali667f700302b: Gained carrier Jan 17 12:22:26.695780 kubelet[2610]: I0117 12:22:26.695465 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.595 [INFO][3984] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0 calico-kube-controllers-7d68f7877f- calico-system c715a196-a54b-4348-9ab5-065abb9617bb 822 0 2025-01-17 12:22:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d68f7877f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal calico-kube-controllers-7d68f7877f-n2vhz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali667f700302b [] []}} ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.595 [INFO][3984] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.636 [INFO][3995] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.649 [INFO][3995] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", "pod":"calico-kube-controllers-7d68f7877f-n2vhz", "timestamp":"2025-01-17 12:22:26.636070151 +0000 UTC"}, Hostname:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.649 [INFO][3995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.649 [INFO][3995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.649 [INFO][3995] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.652 [INFO][3995] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.657 [INFO][3995] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.662 [INFO][3995] ipam/ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.664 [INFO][3995] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.667 [INFO][3995] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.667 [INFO][3995] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.669 [INFO][3995] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622 Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.674 [INFO][3995] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.682 [INFO][3995] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.129/26] block=192.168.80.128/26 handle="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.682 [INFO][3995] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.129/26] handle="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.682 [INFO][3995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:22:26.726947 containerd[1468]: 2025-01-17 12:22:26.682 [INFO][3995] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.129/26] IPv6=[] ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.728868 containerd[1468]: 2025-01-17 12:22:26.684 [INFO][3984] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0", GenerateName:"calico-kube-controllers-7d68f7877f-", Namespace:"calico-system", SelfLink:"", UID:"c715a196-a54b-4348-9ab5-065abb9617bb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d68f7877f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-7d68f7877f-n2vhz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali667f700302b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:26.728868 containerd[1468]: 2025-01-17 12:22:26.685 [INFO][3984] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.129/32] ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.728868 containerd[1468]: 2025-01-17 12:22:26.685 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali667f700302b ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.728868 containerd[1468]: 2025-01-17 12:22:26.692 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.728868 containerd[1468]: 2025-01-17 12:22:26.694 [INFO][3984] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0", GenerateName:"calico-kube-controllers-7d68f7877f-", Namespace:"calico-system", SelfLink:"", UID:"c715a196-a54b-4348-9ab5-065abb9617bb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d68f7877f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622", Pod:"calico-kube-controllers-7d68f7877f-n2vhz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali667f700302b", MAC:"6e:c7:44:e5:c4:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:26.728868 containerd[1468]: 2025-01-17 12:22:26.713 [INFO][3984] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Namespace="calico-system" Pod="calico-kube-controllers-7d68f7877f-n2vhz" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:26.775669 systemd[1]: run-containerd-runc-k8s.io-ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3-runc.UQEqUz.mount: Deactivated successfully. Jan 17 12:22:26.793561 containerd[1468]: time="2025-01-17T12:22:26.793402841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:26.793561 containerd[1468]: time="2025-01-17T12:22:26.793492559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:26.793561 containerd[1468]: time="2025-01-17T12:22:26.793519222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:26.793878 containerd[1468]: time="2025-01-17T12:22:26.793642049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:26.826762 systemd[1]: Started cri-containerd-cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622.scope - libcontainer container cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622. Jan 17 12:22:26.939464 containerd[1468]: time="2025-01-17T12:22:26.939296413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d68f7877f-n2vhz,Uid:c715a196-a54b-4348-9ab5-065abb9617bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\"" Jan 17 12:22:26.947125 containerd[1468]: time="2025-01-17T12:22:26.944921739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:22:27.367545 containerd[1468]: time="2025-01-17T12:22:27.366161073Z" level=info msg="StopPodSandbox for \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\"" Jan 17 12:22:27.367545 containerd[1468]: time="2025-01-17T12:22:27.366802193Z" level=info msg="StopPodSandbox for \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\"" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.458 [INFO][4125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.459 [INFO][4125] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" iface="eth0" netns="/var/run/netns/cni-00c57bf7-0afa-45c4-7fd1-f75dae858e74" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.459 [INFO][4125] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" iface="eth0" netns="/var/run/netns/cni-00c57bf7-0afa-45c4-7fd1-f75dae858e74" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.460 [INFO][4125] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" iface="eth0" netns="/var/run/netns/cni-00c57bf7-0afa-45c4-7fd1-f75dae858e74" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.460 [INFO][4125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.460 [INFO][4125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.501 [INFO][4141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.501 [INFO][4141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.501 [INFO][4141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.511 [WARNING][4141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.511 [INFO][4141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.513 [INFO][4141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:27.517191 containerd[1468]: 2025-01-17 12:22:27.515 [INFO][4125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:27.517987 containerd[1468]: time="2025-01-17T12:22:27.517416901Z" level=info msg="TearDown network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\" successfully" Jan 17 12:22:27.517987 containerd[1468]: time="2025-01-17T12:22:27.517501021Z" level=info msg="StopPodSandbox for \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\" returns successfully" Jan 17 12:22:27.518908 containerd[1468]: time="2025-01-17T12:22:27.518868246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-sv9sp,Uid:5d7aeb14-5869-48a1-96a7-a215252689a5,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:22:27.527939 systemd[1]: run-netns-cni\x2d00c57bf7\x2d0afa\x2d45c4\x2d7fd1\x2df75dae858e74.mount: Deactivated successfully. 
Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.458 [INFO][4129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.462 [INFO][4129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" iface="eth0" netns="/var/run/netns/cni-47ae5d7b-8eef-ee83-5b38-d1c2a1cc7ff7" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.463 [INFO][4129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" iface="eth0" netns="/var/run/netns/cni-47ae5d7b-8eef-ee83-5b38-d1c2a1cc7ff7" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.465 [INFO][4129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" iface="eth0" netns="/var/run/netns/cni-47ae5d7b-8eef-ee83-5b38-d1c2a1cc7ff7" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.465 [INFO][4129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.465 [INFO][4129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.506 [INFO][4142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.506 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.513 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.525 [WARNING][4142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.525 [INFO][4142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.529 [INFO][4142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:27.535545 containerd[1468]: 2025-01-17 12:22:27.533 [INFO][4129] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:27.537448 containerd[1468]: time="2025-01-17T12:22:27.536098809Z" level=info msg="TearDown network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\" successfully" Jan 17 12:22:27.537448 containerd[1468]: time="2025-01-17T12:22:27.536135507Z" level=info msg="StopPodSandbox for \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\" returns successfully" Jan 17 12:22:27.538542 containerd[1468]: time="2025-01-17T12:22:27.538496606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjb8c,Uid:68c040bb-a18d-4fed-9ea3-2d0c63ef70bc,Namespace:calico-system,Attempt:1,}" Jan 17 12:22:27.541474 systemd[1]: run-netns-cni\x2d47ae5d7b\x2d8eef\x2dee83\x2d5b38\x2dd1c2a1cc7ff7.mount: Deactivated successfully. Jan 17 12:22:27.766108 systemd-networkd[1378]: calie177f1e9a5f: Link UP Jan 17 12:22:27.771978 systemd-networkd[1378]: calie177f1e9a5f: Gained carrier Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.628 [INFO][4154] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0 calico-apiserver-7b5f56598d- calico-apiserver 5d7aeb14-5869-48a1-96a7-a215252689a5 835 0 2025-01-17 12:22:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b5f56598d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal calico-apiserver-7b5f56598d-sv9sp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie177f1e9a5f [] []}} ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.629 [INFO][4154] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.688 [INFO][4175] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" HandleID="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.707 [INFO][4175] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" HandleID="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000513c0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", "pod":"calico-apiserver-7b5f56598d-sv9sp", "timestamp":"2025-01-17 12:22:27.688874668 +0000 UTC"}, Hostname:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.707 [INFO][4175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.708 [INFO][4175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.708 [INFO][4175] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.711 [INFO][4175] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.722 [INFO][4175] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.730 [INFO][4175] ipam/ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.732 [INFO][4175] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.735 [INFO][4175] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.736 [INFO][4175] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.738 [INFO][4175] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21 Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.746 [INFO][4175] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.755 [INFO][4175] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.130/26] block=192.168.80.128/26 handle="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.755 [INFO][4175] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.130/26] handle="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" 
host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.755 [INFO][4175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:27.795365 containerd[1468]: 2025-01-17 12:22:27.755 [INFO][4175] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.130/26] IPv6=[] ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" HandleID="k8s-pod-network.6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.798520 containerd[1468]: 2025-01-17 12:22:27.758 [INFO][4154] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7aeb14-5869-48a1-96a7-a215252689a5", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7b5f56598d-sv9sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie177f1e9a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:27.798520 containerd[1468]: 2025-01-17 12:22:27.759 [INFO][4154] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.130/32] ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.798520 containerd[1468]: 2025-01-17 12:22:27.759 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie177f1e9a5f ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.798520 containerd[1468]: 2025-01-17 12:22:27.773 
[INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.798520 containerd[1468]: 2025-01-17 12:22:27.774 [INFO][4154] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7aeb14-5869-48a1-96a7-a215252689a5", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21", Pod:"calico-apiserver-7b5f56598d-sv9sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie177f1e9a5f", MAC:"02:b7:24:f1:b3:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:27.798520 containerd[1468]: 2025-01-17 12:22:27.791 [INFO][4154] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-sv9sp" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:27.849031 systemd-networkd[1378]: cali57eb19459d5: Link UP Jan 17 12:22:27.851566 systemd-networkd[1378]: cali57eb19459d5: Gained carrier Jan 17 12:22:27.872818 systemd-networkd[1378]: cali667f700302b: Gained IPv6LL Jan 17 12:22:27.877281 containerd[1468]: time="2025-01-17T12:22:27.876616854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:27.877281 containerd[1468]: time="2025-01-17T12:22:27.877182446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:27.879152 containerd[1468]: time="2025-01-17T12:22:27.877416390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:27.884656 containerd[1468]: time="2025-01-17T12:22:27.884434061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.662 [INFO][4163] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0 csi-node-driver- calico-system 68c040bb-a18d-4fed-9ea3-2d0c63ef70bc 834 0 2025-01-17 12:22:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal csi-node-driver-gjb8c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali57eb19459d5 [] []}} ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.662 [INFO][4163] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.726 [INFO][4179] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" HandleID="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.743 [INFO][4179] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" HandleID="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319930), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", "pod":"csi-node-driver-gjb8c", "timestamp":"2025-01-17 12:22:27.726664948 +0000 UTC"}, Hostname:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.743 [INFO][4179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.756 [INFO][4179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.756 [INFO][4179] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.759 [INFO][4179] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.773 [INFO][4179] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.788 [INFO][4179] ipam/ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.798 [INFO][4179] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.803 [INFO][4179] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.804 [INFO][4179] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.807 [INFO][4179] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060 Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.819 [INFO][4179] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.835 [INFO][4179] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.131/26] block=192.168.80.128/26 handle="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.835 [INFO][4179] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.131/26] handle="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.835 [INFO][4179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
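
Before assigning, the allocator tries the node's affine block 192.168.80.128/26, loads it, and claims 192.168.80.131 from it. The affinity check is ordinary prefix arithmetic; a small stdlib sketch using the values from this trace:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.80.128/26") // node's affine block
        claimed := netip.MustParseAddr("192.168.80.131")    // address claimed above

        // Affinity only helps if the candidate address falls inside the block.
        fmt.Println(block.Contains(claimed)) // true

        // A /26 holds 64 addresses: .128 through .191.
        fmt.Println(1 << (32 - block.Bits())) // 64
    }
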
Jan 17 12:22:27.886107 containerd[1468]: 2025-01-17 12:22:27.835 [INFO][4179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.131/26] IPv6=[] ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" HandleID="k8s-pod-network.afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.888912 containerd[1468]: 2025-01-17 12:22:27.841 [INFO][4163] cni-plugin/k8s.go 386: Populated endpoint ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-gjb8c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57eb19459d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:27.888912 containerd[1468]: 2025-01-17 12:22:27.841 [INFO][4163] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.131/32] ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.888912 containerd[1468]: 2025-01-17 12:22:27.841 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57eb19459d5 ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.888912 containerd[1468]: 2025-01-17 12:22:27.852 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" 
WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.888912 containerd[1468]: 2025-01-17 12:22:27.854 [INFO][4163] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060", Pod:"csi-node-driver-gjb8c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57eb19459d5", MAC:"b2:f4:7e:97:0f:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:27.888912 containerd[1468]: 2025-01-17 12:22:27.879 [INFO][4163] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060" Namespace="calico-system" Pod="csi-node-driver-gjb8c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:27.923502 systemd[1]: Started cri-containerd-6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21.scope - libcontainer container 6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21. Jan 17 12:22:27.982608 containerd[1468]: time="2025-01-17T12:22:27.980804504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:27.982608 containerd[1468]: time="2025-01-17T12:22:27.982177695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:27.982608 containerd[1468]: time="2025-01-17T12:22:27.982210445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:27.982608 containerd[1468]: time="2025-01-17T12:22:27.982355420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:28.014510 systemd[1]: Started cri-containerd-afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060.scope - libcontainer container afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060. Jan 17 12:22:28.096484 containerd[1468]: time="2025-01-17T12:22:28.096235007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjb8c,Uid:68c040bb-a18d-4fed-9ea3-2d0c63ef70bc,Namespace:calico-system,Attempt:1,} returns sandbox id \"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060\"" Jan 17 12:22:28.116314 containerd[1468]: time="2025-01-17T12:22:28.115550276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-sv9sp,Uid:5d7aeb14-5869-48a1-96a7-a215252689a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21\"" Jan 17 12:22:29.022521 systemd-networkd[1378]: cali57eb19459d5: Gained IPv6LL Jan 17 12:22:29.072442 containerd[1468]: time="2025-01-17T12:22:29.072368151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.073845 containerd[1468]: time="2025-01-17T12:22:29.073765663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:22:29.075582 containerd[1468]: time="2025-01-17T12:22:29.075484790Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.079191 containerd[1468]: time="2025-01-17T12:22:29.079096963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:29.080634 containerd[1468]: time="2025-01-17T12:22:29.080363834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.135093403s" Jan 17 12:22:29.080634 containerd[1468]: time="2025-01-17T12:22:29.080412859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:22:29.083184 containerd[1468]: time="2025-01-17T12:22:29.082940874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:22:29.105816 containerd[1468]: time="2025-01-17T12:22:29.105733108Z" level=info msg="CreateContainer within sandbox \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:22:29.127185 containerd[1468]: time="2025-01-17T12:22:29.127122229Z" level=info msg="CreateContainer within sandbox \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\"" Jan 17 12:22:29.129346 containerd[1468]: 
time="2025-01-17T12:22:29.128034319Z" level=info msg="StartContainer for \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\"" Jan 17 12:22:29.169569 systemd[1]: Started cri-containerd-347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018.scope - libcontainer container 347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018. Jan 17 12:22:29.226948 containerd[1468]: time="2025-01-17T12:22:29.226844091Z" level=info msg="StartContainer for \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\" returns successfully" Jan 17 12:22:29.366274 containerd[1468]: time="2025-01-17T12:22:29.366178197Z" level=info msg="StopPodSandbox for \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\"" Jan 17 12:22:29.366997 containerd[1468]: time="2025-01-17T12:22:29.366925646Z" level=info msg="StopPodSandbox for \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\"" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.483 [INFO][4369] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.483 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" iface="eth0" netns="/var/run/netns/cni-3b188c6f-7dd2-da39-029a-8a29e2a5d5a3" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.483 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" iface="eth0" netns="/var/run/netns/cni-3b188c6f-7dd2-da39-029a-8a29e2a5d5a3" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.487 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" iface="eth0" netns="/var/run/netns/cni-3b188c6f-7dd2-da39-029a-8a29e2a5d5a3" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.487 [INFO][4369] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.487 [INFO][4369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.576 [INFO][4386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.577 [INFO][4386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.578 [INFO][4386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.588 [WARNING][4386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.588 [INFO][4386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.590 [INFO][4386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:29.594030 containerd[1468]: 2025-01-17 12:22:29.592 [INFO][4369] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:29.598031 containerd[1468]: time="2025-01-17T12:22:29.594208167Z" level=info msg="TearDown network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\" successfully" Jan 17 12:22:29.598031 containerd[1468]: time="2025-01-17T12:22:29.596279137Z" level=info msg="StopPodSandbox for \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\" returns successfully" Jan 17 12:22:29.598031 containerd[1468]: time="2025-01-17T12:22:29.597140107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xkbk,Uid:9a11ecfa-a757-43fc-8ab9-8e4424da26ba,Namespace:kube-system,Attempt:1,}" Jan 17 12:22:29.604816 systemd[1]: run-netns-cni\x2d3b188c6f\x2d7dd2\x2dda39\x2d029a\x2d8a29e2a5d5a3.mount: Deactivated successfully. Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.471 [INFO][4373] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.471 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" iface="eth0" netns="/var/run/netns/cni-e80333f9-3c44-0545-01b0-c33229c3070a" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.472 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" iface="eth0" netns="/var/run/netns/cni-e80333f9-3c44-0545-01b0-c33229c3070a" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.473 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" iface="eth0" netns="/var/run/netns/cni-e80333f9-3c44-0545-01b0-c33229c3070a" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.473 [INFO][4373] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.473 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.581 [INFO][4385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.581 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.590 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.609 [WARNING][4385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.610 [INFO][4385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.612 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:29.618039 containerd[1468]: 2025-01-17 12:22:29.614 [INFO][4373] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:29.618039 containerd[1468]: time="2025-01-17T12:22:29.617138555Z" level=info msg="TearDown network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\" successfully" Jan 17 12:22:29.618039 containerd[1468]: time="2025-01-17T12:22:29.617170396Z" level=info msg="StopPodSandbox for \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\" returns successfully" Jan 17 12:22:29.619053 containerd[1468]: time="2025-01-17T12:22:29.618092179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xn5bq,Uid:d4d7de4f-610c-42bb-9ed3-95154eded5ac,Namespace:kube-system,Attempt:1,}" Jan 17 12:22:29.626908 systemd[1]: run-netns-cni\x2de80333f9\x2d3c44\x2d0545\x2d01b0\x2dc33229c3070a.mount: Deactivated successfully. 
Jan 17 12:22:29.790837 systemd-networkd[1378]: calie177f1e9a5f: Gained IPv6LL Jan 17 12:22:29.913446 systemd-networkd[1378]: cali24b7f2a76c3: Link UP Jan 17 12:22:29.916008 systemd-networkd[1378]: cali24b7f2a76c3: Gained carrier Jan 17 12:22:29.944917 kubelet[2610]: I0117 12:22:29.944720 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d68f7877f-n2vhz" podStartSLOduration=23.806738695 podStartE2EDuration="25.944581975s" podCreationTimestamp="2025-01-17 12:22:04 +0000 UTC" firstStartedPulling="2025-01-17 12:22:26.94413794 +0000 UTC m=+43.772236484" lastFinishedPulling="2025-01-17 12:22:29.081981208 +0000 UTC m=+45.910079764" observedRunningTime="2025-01-17 12:22:29.692822205 +0000 UTC m=+46.520920765" watchObservedRunningTime="2025-01-17 12:22:29.944581975 +0000 UTC m=+46.772680532" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.748 [INFO][4398] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0 coredns-7db6d8ff4d- kube-system 9a11ecfa-a757-43fc-8ab9-8e4424da26ba 856 0 2025-01-17 12:21:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal coredns-7db6d8ff4d-9xkbk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24b7f2a76c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.749 [INFO][4398] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.826 [INFO][4419] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" HandleID="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.847 [INFO][4419] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" HandleID="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051990), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-9xkbk", "timestamp":"2025-01-17 12:22:29.82600129 +0000 UTC"}, Hostname:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.847 [INFO][4419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.847 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.847 [INFO][4419] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.850 [INFO][4419] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.859 [INFO][4419] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.865 [INFO][4419] ipam/ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.869 [INFO][4419] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.872 [INFO][4419] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.872 [INFO][4419] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.875 [INFO][4419] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.881 [INFO][4419] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.898 [INFO][4419] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.132/26] block=192.168.80.128/26 handle="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.898 [INFO][4419] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.132/26] handle="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.899 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
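
The claim itself is optimistic: the allocator writes the whole block back ("Writing block in order to claim IPs") and relies on the datastore's revision check to reject stale writes, retrying on conflict. A compare-and-swap sketch of that pattern, with a plain integer revision standing in for the datastore resource version (an assumption, not Calico's actual storage layer):

    package main

    import (
        "errors"
        "fmt"
    )

    type block struct {
        rev       int             // datastore resource version
        allocated map[string]bool // addresses already claimed in the block
    }

    var errConflict = errors.New("stale revision: block changed since read")

    // writeBlock succeeds only if the caller read the latest revision,
    // modeling the read-modify-write behind "Writing block in order to claim IPs".
    func writeBlock(stored *block, readRev int, ip string) error {
        if readRev != stored.rev {
            return errConflict // another allocator won; re-read and retry
        }
        stored.allocated[ip] = true
        stored.rev++
        return nil
    }

    func main() {
        b := &block{rev: 7, allocated: map[string]bool{"192.168.80.131": true}}
        fmt.Println(writeBlock(b, 7, "192.168.80.132")) // <nil>: claim succeeds
        fmt.Println(writeBlock(b, 7, "192.168.80.133")) // conflict: rev is now 8
    }
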
Jan 17 12:22:29.959314 containerd[1468]: 2025-01-17 12:22:29.899 [INFO][4419] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.132/26] IPv6=[] ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" HandleID="k8s-pod-network.c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.961737 containerd[1468]: 2025-01-17 12:22:29.903 [INFO][4398] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9a11ecfa-a757-43fc-8ab9-8e4424da26ba", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-9xkbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7f2a76c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:29.961737 containerd[1468]: 2025-01-17 12:22:29.903 [INFO][4398] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.132/32] ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.961737 containerd[1468]: 2025-01-17 12:22:29.903 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24b7f2a76c3 ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.961737 containerd[1468]: 2025-01-17 12:22:29.916 [INFO][4398] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:29.961737 containerd[1468]: 2025-01-17 12:22:29.918 [INFO][4398] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9a11ecfa-a757-43fc-8ab9-8e4424da26ba", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad", Pod:"coredns-7db6d8ff4d-9xkbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7f2a76c3", MAC:"26:c1:9c:28:9d:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:29.961737 containerd[1468]: 2025-01-17 12:22:29.955 [INFO][4398] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9xkbk" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:30.014494 systemd-networkd[1378]: caliaa96cb793f3: Link UP Jan 17 12:22:30.014900 systemd-networkd[1378]: caliaa96cb793f3: Gained carrier Jan 17 12:22:30.035018 containerd[1468]: time="2025-01-17T12:22:30.034289833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:30.036463 containerd[1468]: time="2025-01-17T12:22:30.036337170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:30.036463 containerd[1468]: time="2025-01-17T12:22:30.036401481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:30.038573 containerd[1468]: time="2025-01-17T12:22:30.037059446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.773 [INFO][4408] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0 coredns-7db6d8ff4d- kube-system d4d7de4f-610c-42bb-9ed3-95154eded5ac 855 0 2025-01-17 12:21:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal coredns-7db6d8ff4d-xn5bq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa96cb793f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.774 [INFO][4408] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.861 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" HandleID="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.891 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" HandleID="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000524ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-xn5bq", "timestamp":"2025-01-17 12:22:29.861555355 +0000 UTC"}, Hostname:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.892 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
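
In the endpoint dumps above the CoreDNS ports print in hex: Port:0x35 is 53 (DNS, over both UDP and TCP) and Port:0x23c1 is 9153 (metrics), carried in a string-or-number protocol union. A short sketch confirming the arithmetic; the types below are simplified stand-ins, not Calico's numorstring package:

    package main

    import "fmt"

    // protocol is a simplified string-or-number union like the one in the dump.
    type protocol struct {
        NumVal int
        StrVal string // "UDP" or "TCP" when set as a string
    }

    type endpointPort struct {
        Name  string
        Proto protocol
        Port  uint16
    }

    func main() {
        ports := []endpointPort{
            {"dns", protocol{StrVal: "UDP"}, 0x35},       // 53
            {"dns-tcp", protocol{StrVal: "TCP"}, 0x35},   // 53
            {"metrics", protocol{StrVal: "TCP"}, 0x23c1}, // 9153
        }
        for _, p := range ports {
            fmt.Printf("%s/%s -> %d\n", p.Name, p.Proto.StrVal, p.Port)
        }
    }
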
Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.900 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.900 [INFO][4425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.911 [INFO][4425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.928 [INFO][4425] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.941 [INFO][4425] ipam/ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.953 [INFO][4425] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.958 [INFO][4425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.958 [INFO][4425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.966 [INFO][4425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908 Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.979 [INFO][4425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.991 [INFO][4425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.133/26] block=192.168.80.128/26 handle="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.992 [INFO][4425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.133/26] handle="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.992 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
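
Across the four ADDs in this window the affine block hands out consecutive addresses: .130, .131, .132, and now .133. A next-free scan over the /26 reproduces that sequence; the sketch below uses only the stdlib, omits the reservations and handle tracking the real allocator layers on top, and assumes .128 and .129 were taken before this excerpt:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks the block in order and returns the first unallocated address.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted
    }

    func main() {
        block := netip.MustParsePrefix("192.168.80.128/26")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.80.128"): true, // block base (assumed taken earlier)
            netip.MustParseAddr("192.168.80.129"): true, // assumed allocated before this excerpt
            netip.MustParseAddr("192.168.80.130"): true, // calico-apiserver-7b5f56598d-sv9sp
            netip.MustParseAddr("192.168.80.131"): true, // csi-node-driver-gjb8c
            netip.MustParseAddr("192.168.80.132"): true, // coredns-7db6d8ff4d-9xkbk
        }
        if a, ok := nextFree(block, used); ok {
            fmt.Println(a) // 192.168.80.133, matching the claim logged above
        }
    }
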
Jan 17 12:22:30.065364 containerd[1468]: 2025-01-17 12:22:29.993 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.133/26] IPv6=[] ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" HandleID="k8s-pod-network.22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:30.066641 containerd[1468]: 2025-01-17 12:22:30.002 [INFO][4408] cni-plugin/k8s.go 386: Populated endpoint ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d4d7de4f-610c-42bb-9ed3-95154eded5ac", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-xn5bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa96cb793f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:30.066641 containerd[1468]: 2025-01-17 12:22:30.004 [INFO][4408] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.133/32] ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:30.066641 containerd[1468]: 2025-01-17 12:22:30.005 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa96cb793f3 ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:30.066641 containerd[1468]: 2025-01-17 12:22:30.013 [INFO][4408] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:30.066641 containerd[1468]: 2025-01-17 12:22:30.016 [INFO][4408] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d4d7de4f-610c-42bb-9ed3-95154eded5ac", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908", Pod:"coredns-7db6d8ff4d-xn5bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa96cb793f3", MAC:"1e:f3:75:7a:7c:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:30.066641 containerd[1468]: 2025-01-17 12:22:30.056 [INFO][4408] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xn5bq" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:30.110921 systemd[1]: Started cri-containerd-c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad.scope - libcontainer container c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad. Jan 17 12:22:30.179618 containerd[1468]: time="2025-01-17T12:22:30.179464392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:30.181859 containerd[1468]: time="2025-01-17T12:22:30.180292456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:30.183313 containerd[1468]: time="2025-01-17T12:22:30.181900363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:30.183313 containerd[1468]: time="2025-01-17T12:22:30.182042511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:30.222560 systemd[1]: Started cri-containerd-22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908.scope - libcontainer container 22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908. Jan 17 12:22:30.285223 containerd[1468]: time="2025-01-17T12:22:30.281200156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9xkbk,Uid:9a11ecfa-a757-43fc-8ab9-8e4424da26ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad\"" Jan 17 12:22:30.329770 containerd[1468]: time="2025-01-17T12:22:30.329448139Z" level=info msg="CreateContainer within sandbox \"c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:22:30.366550 containerd[1468]: time="2025-01-17T12:22:30.365739542Z" level=info msg="CreateContainer within sandbox \"c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2500e188f2eff340b736b2082a8f90dc08fc5e52a1da3dbf0d3a64b4e139ddd9\"" Jan 17 12:22:30.367293 containerd[1468]: time="2025-01-17T12:22:30.367229146Z" level=info msg="StopPodSandbox for \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\"" Jan 17 12:22:30.374384 containerd[1468]: time="2025-01-17T12:22:30.374115103Z" level=info msg="StartContainer for \"2500e188f2eff340b736b2082a8f90dc08fc5e52a1da3dbf0d3a64b4e139ddd9\"" Jan 17 12:22:30.429563 containerd[1468]: time="2025-01-17T12:22:30.429139967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xn5bq,Uid:d4d7de4f-610c-42bb-9ed3-95154eded5ac,Namespace:kube-system,Attempt:1,} returns sandbox id \"22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908\"" Jan 17 12:22:30.474376 containerd[1468]: time="2025-01-17T12:22:30.472094664Z" level=info msg="CreateContainer within sandbox \"22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:22:30.496815 systemd[1]: Started cri-containerd-2500e188f2eff340b736b2082a8f90dc08fc5e52a1da3dbf0d3a64b4e139ddd9.scope - libcontainer container 2500e188f2eff340b736b2082a8f90dc08fc5e52a1da3dbf0d3a64b4e139ddd9. 
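
CoreDNS comes up in the CRI order the entries above show: RunPodSandbox returns a sandbox ID, CreateContainer inside that sandbox returns a container ID, and StartContainer runs it, with containerd wrapping each container in a systemd scope unit. A schematic of that call order as a plain interface; the method names follow the log, but the signatures are assumptions for illustration, not the actual CRI gRPC API:

    package main

    import "fmt"

    // runtime models the three CRI calls in the order the log shows them.
    type runtime interface {
        RunPodSandbox(name string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
        f.n++
        return fmt.Sprintf("sandbox-%d", f.n), nil
    }
    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
        f.n++
        return fmt.Sprintf("ctr-%d", f.n), nil
    }
    func (f *fakeRuntime) StartContainer(id string) error { return nil }

    func main() {
        var r runtime = &fakeRuntime{}
        sb, _ := r.RunPodSandbox("coredns-7db6d8ff4d-9xkbk") // CNI networking happens here
        ctr, _ := r.CreateContainer(sb, "coredns")           // container created in the sandbox
        _ = r.StartContainer(ctr)                            // then started
        fmt.Println(sb, ctr)
    }
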
Jan 17 12:22:30.548001 containerd[1468]: time="2025-01-17T12:22:30.547883319Z" level=info msg="CreateContainer within sandbox \"22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e2687f75ebabbd05b501250e1501945710cc6bf798b27419500e65a9d419d3b\"" Jan 17 12:22:30.550608 containerd[1468]: time="2025-01-17T12:22:30.550345366Z" level=info msg="StartContainer for \"4e2687f75ebabbd05b501250e1501945710cc6bf798b27419500e65a9d419d3b\"" Jan 17 12:22:30.653523 containerd[1468]: time="2025-01-17T12:22:30.652052901Z" level=info msg="StopContainer for \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\" with timeout 300 (s)" Jan 17 12:22:30.657947 containerd[1468]: time="2025-01-17T12:22:30.657899267Z" level=info msg="Stop container \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\" with signal terminated" Jan 17 12:22:30.722511 systemd[1]: Started cri-containerd-4e2687f75ebabbd05b501250e1501945710cc6bf798b27419500e65a9d419d3b.scope - libcontainer container 4e2687f75ebabbd05b501250e1501945710cc6bf798b27419500e65a9d419d3b. Jan 17 12:22:30.830375 containerd[1468]: time="2025-01-17T12:22:30.830181346Z" level=info msg="StartContainer for \"2500e188f2eff340b736b2082a8f90dc08fc5e52a1da3dbf0d3a64b4e139ddd9\" returns successfully" Jan 17 12:22:30.918978 containerd[1468]: time="2025-01-17T12:22:30.918904371Z" level=info msg="StartContainer for \"4e2687f75ebabbd05b501250e1501945710cc6bf798b27419500e65a9d419d3b\" returns successfully" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.728 [INFO][4561] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.728 [INFO][4561] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" iface="eth0" netns="/var/run/netns/cni-8e7a3e2c-2d87-ee38-5afa-ad2f242f5c29" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.730 [INFO][4561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" iface="eth0" netns="/var/run/netns/cni-8e7a3e2c-2d87-ee38-5afa-ad2f242f5c29" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.735 [INFO][4561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" iface="eth0" netns="/var/run/netns/cni-8e7a3e2c-2d87-ee38-5afa-ad2f242f5c29" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.735 [INFO][4561] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.735 [INFO][4561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.963 [INFO][4614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.965 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.965 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.997 [WARNING][4614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:30.997 [INFO][4614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:31.001 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:31.013477 containerd[1468]: 2025-01-17 12:22:31.007 [INFO][4561] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:31.018425 containerd[1468]: time="2025-01-17T12:22:31.016891538Z" level=info msg="TearDown network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\" successfully" Jan 17 12:22:31.018425 containerd[1468]: time="2025-01-17T12:22:31.016935559Z" level=info msg="StopPodSandbox for \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\" returns successfully" Jan 17 12:22:31.025270 containerd[1468]: time="2025-01-17T12:22:31.022359813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-kfl2c,Uid:df01509d-c2e3-4521-bb4f-4b625ab957e3,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:22:31.027709 systemd[1]: run-netns-cni\x2d8e7a3e2c\x2d2d87\x2dee38\x2d5afa\x2dad2f242f5c29.mount: Deactivated successfully. 
Jan 17 12:22:31.135353 systemd-networkd[1378]: caliaa96cb793f3: Gained IPv6LL Jan 17 12:22:31.239217 containerd[1468]: time="2025-01-17T12:22:31.239102935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:31.242457 containerd[1468]: time="2025-01-17T12:22:31.242389295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:22:31.244613 containerd[1468]: time="2025-01-17T12:22:31.244507503Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:31.253273 containerd[1468]: time="2025-01-17T12:22:31.252823977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.169840164s" Jan 17 12:22:31.253273 containerd[1468]: time="2025-01-17T12:22:31.252881669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:22:31.254378 containerd[1468]: time="2025-01-17T12:22:31.254328754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:31.260280 containerd[1468]: time="2025-01-17T12:22:31.259856647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:22:31.261856 containerd[1468]: time="2025-01-17T12:22:31.261810941Z" level=info msg="CreateContainer within sandbox \"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:22:31.296653 containerd[1468]: time="2025-01-17T12:22:31.296459532Z" level=info msg="CreateContainer within sandbox \"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a8d0122d012490d7e0069dada3091a818a184a970d18abe8c8d6ecf120c9e969\"" Jan 17 12:22:31.297997 containerd[1468]: time="2025-01-17T12:22:31.297489991Z" level=info msg="StartContainer for \"a8d0122d012490d7e0069dada3091a818a184a970d18abe8c8d6ecf120c9e969\"" Jan 17 12:22:31.327882 systemd-networkd[1378]: cali24b7f2a76c3: Gained IPv6LL Jan 17 12:22:31.414885 systemd[1]: Started cri-containerd-a8d0122d012490d7e0069dada3091a818a184a970d18abe8c8d6ecf120c9e969.scope - libcontainer container a8d0122d012490d7e0069dada3091a818a184a970d18abe8c8d6ecf120c9e969. 
Jan 17 12:22:31.478151 systemd-networkd[1378]: cali3af6971d627: Link UP Jan 17 12:22:31.478540 systemd-networkd[1378]: cali3af6971d627: Gained carrier Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.233 [INFO][4668] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0 calico-apiserver-7b5f56598d- calico-apiserver df01509d-c2e3-4521-bb4f-4b625ab957e3 892 0 2025-01-17 12:22:04 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b5f56598d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal calico-apiserver-7b5f56598d-kfl2c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3af6971d627 [] []}} ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.233 [INFO][4668] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.338 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" HandleID="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.377 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" HandleID="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051a70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", "pod":"calico-apiserver-7b5f56598d-kfl2c", "timestamp":"2025-01-17 12:22:31.338167737 +0000 UTC"}, Hostname:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.377 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.377 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.377 [INFO][4680] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.388 [INFO][4680] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.420 [INFO][4680] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.429 [INFO][4680] ipam/ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.432 [INFO][4680] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.436 [INFO][4680] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.436 [INFO][4680] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.439 [INFO][4680] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060 Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.450 [INFO][4680] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.464 [INFO][4680] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.134/26] block=192.168.80.128/26 handle="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.464 [INFO][4680] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.134/26] handle="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.464 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:22:31.509084 containerd[1468]: 2025-01-17 12:22:31.464 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.134/26] IPv6=[] ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" HandleID="k8s-pod-network.a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.512541 containerd[1468]: 2025-01-17 12:22:31.469 [INFO][4668] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"df01509d-c2e3-4521-bb4f-4b625ab957e3", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7b5f56598d-kfl2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3af6971d627", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:31.512541 containerd[1468]: 2025-01-17 12:22:31.469 [INFO][4668] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.134/32] ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.512541 containerd[1468]: 2025-01-17 12:22:31.469 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3af6971d627 ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.512541 containerd[1468]: 2025-01-17 12:22:31.477 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver"
Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.512541 containerd[1468]: 2025-01-17 12:22:31.482 [INFO][4668] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"df01509d-c2e3-4521-bb4f-4b625ab957e3", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060", Pod:"calico-apiserver-7b5f56598d-kfl2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3af6971d627", MAC:"76:c0:e6:74:82:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:31.512541 containerd[1468]: 2025-01-17 12:22:31.504 [INFO][4668] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060" Namespace="calico-apiserver" Pod="calico-apiserver-7b5f56598d-kfl2c" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:31.577438 containerd[1468]: time="2025-01-17T12:22:31.576356881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:31.577438 containerd[1468]: time="2025-01-17T12:22:31.576460330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:31.577438 containerd[1468]: time="2025-01-17T12:22:31.576483774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:31.577438 containerd[1468]: time="2025-01-17T12:22:31.576624509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:31.638509 systemd[1]: Started cri-containerd-a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060.scope - libcontainer container a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060. Jan 17 12:22:31.749326 kubelet[2610]: I0117 12:22:31.748732 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xn5bq" podStartSLOduration=35.748693389 podStartE2EDuration="35.748693389s" podCreationTimestamp="2025-01-17 12:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:31.74641777 +0000 UTC m=+48.574516327" watchObservedRunningTime="2025-01-17 12:22:31.748693389 +0000 UTC m=+48.576791950" Jan 17 12:22:31.749326 kubelet[2610]: I0117 12:22:31.749007 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9xkbk" podStartSLOduration=35.748991357 podStartE2EDuration="35.748991357s" podCreationTimestamp="2025-01-17 12:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:31.716543641 +0000 UTC m=+48.544642199" watchObservedRunningTime="2025-01-17 12:22:31.748991357 +0000 UTC m=+48.577089911" Jan 17 12:22:31.762966 containerd[1468]: time="2025-01-17T12:22:31.761488946Z" level=info msg="StartContainer for \"a8d0122d012490d7e0069dada3091a818a184a970d18abe8c8d6ecf120c9e969\" returns successfully" Jan 17 12:22:32.063783 containerd[1468]: time="2025-01-17T12:22:32.063571104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b5f56598d-kfl2c,Uid:df01509d-c2e3-4521-bb4f-4b625ab957e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060\"" Jan 17 12:22:32.180936 containerd[1468]: time="2025-01-17T12:22:32.180876083Z" level=info msg="StopContainer for \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\" with timeout 5 (s)" Jan 17 12:22:32.181452 containerd[1468]: time="2025-01-17T12:22:32.181383109Z" level=info msg="Stop container \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\" with signal terminated" Jan 17 12:22:32.250947 systemd[1]: cri-containerd-ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3.scope: Deactivated successfully. Jan 17 12:22:32.251829 systemd[1]: cri-containerd-ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3.scope: Consumed 2.174s CPU time. Jan 17 12:22:32.327674 containerd[1468]: time="2025-01-17T12:22:32.327261394Z" level=info msg="shim disconnected" id=ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3 namespace=k8s.io Jan 17 12:22:32.327674 containerd[1468]: time="2025-01-17T12:22:32.327338454Z" level=warning msg="cleaning up after shim disconnected" id=ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3 namespace=k8s.io Jan 17 12:22:32.327674 containerd[1468]: time="2025-01-17T12:22:32.327355395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:32.332372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3-rootfs.mount: Deactivated successfully. 
Jan 17 12:22:32.377500 containerd[1468]: time="2025-01-17T12:22:32.377313513Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:22:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:22:32.714768 containerd[1468]: time="2025-01-17T12:22:32.714513751Z" level=info msg="StopContainer for \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\" with timeout 30 (s)" Jan 17 12:22:32.715380 containerd[1468]: time="2025-01-17T12:22:32.715220516Z" level=info msg="Stop container \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\" with signal terminated" Jan 17 12:22:32.731521 systemd[1]: cri-containerd-347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018.scope: Deactivated successfully. Jan 17 12:22:32.786026 containerd[1468]: time="2025-01-17T12:22:32.785646039Z" level=info msg="shim disconnected" id=347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018 namespace=k8s.io Jan 17 12:22:32.786026 containerd[1468]: time="2025-01-17T12:22:32.785750949Z" level=warning msg="cleaning up after shim disconnected" id=347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018 namespace=k8s.io Jan 17 12:22:32.786026 containerd[1468]: time="2025-01-17T12:22:32.785786998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:32.796296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018-rootfs.mount: Deactivated successfully. Jan 17 12:22:32.800661 systemd-networkd[1378]: cali3af6971d627: Gained IPv6LL Jan 17 12:22:34.470155 systemd[1]: cri-containerd-672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c.scope: Deactivated successfully. Jan 17 12:22:34.511728 containerd[1468]: time="2025-01-17T12:22:34.511349221Z" level=info msg="shim disconnected" id=672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c namespace=k8s.io Jan 17 12:22:34.511728 containerd[1468]: time="2025-01-17T12:22:34.511497207Z" level=warning msg="cleaning up after shim disconnected" id=672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c namespace=k8s.io Jan 17 12:22:34.511728 containerd[1468]: time="2025-01-17T12:22:34.511524890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:34.519224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c-rootfs.mount: Deactivated successfully. 
Jan 17 12:22:34.780433 containerd[1468]: time="2025-01-17T12:22:34.779664240Z" level=info msg="StopContainer for \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\" returns successfully" Jan 17 12:22:34.783767 containerd[1468]: time="2025-01-17T12:22:34.783729220Z" level=info msg="StopPodSandbox for \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\"" Jan 17 12:22:34.785516 containerd[1468]: time="2025-01-17T12:22:34.785478274Z" level=info msg="Container to stop \"b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:22:34.785824 containerd[1468]: time="2025-01-17T12:22:34.785665439Z" level=info msg="Container to stop \"83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:22:34.785824 containerd[1468]: time="2025-01-17T12:22:34.785695345Z" level=info msg="Container to stop \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:22:34.790382 containerd[1468]: time="2025-01-17T12:22:34.789669329Z" level=info msg="StopContainer for \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\" returns successfully" Jan 17 12:22:34.792856 containerd[1468]: time="2025-01-17T12:22:34.790797168Z" level=info msg="StopContainer for \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\" returns successfully" Jan 17 12:22:34.794961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9-shm.mount: Deactivated successfully. Jan 17 12:22:34.795686 containerd[1468]: time="2025-01-17T12:22:34.795446860Z" level=info msg="StopPodSandbox for \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\"" Jan 17 12:22:34.795686 containerd[1468]: time="2025-01-17T12:22:34.795500266Z" level=info msg="Container to stop \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:22:34.801644 containerd[1468]: time="2025-01-17T12:22:34.801596400Z" level=info msg="StopPodSandbox for \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\"" Jan 17 12:22:34.801804 containerd[1468]: time="2025-01-17T12:22:34.801653266Z" level=info msg="Container to stop \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:22:34.810324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622-shm.mount: Deactivated successfully. Jan 17 12:22:34.826094 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165-shm.mount: Deactivated successfully. Jan 17 12:22:34.833714 systemd[1]: cri-containerd-cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622.scope: Deactivated successfully. Jan 17 12:22:34.845000 systemd[1]: cri-containerd-d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9.scope: Deactivated successfully. Jan 17 12:22:34.848188 systemd[1]: cri-containerd-c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165.scope: Deactivated successfully. 
Jan 17 12:22:34.962994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9-rootfs.mount: Deactivated successfully. Jan 17 12:22:34.970776 containerd[1468]: time="2025-01-17T12:22:34.969156727Z" level=info msg="shim disconnected" id=d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9 namespace=k8s.io Jan 17 12:22:34.970776 containerd[1468]: time="2025-01-17T12:22:34.969233678Z" level=warning msg="cleaning up after shim disconnected" id=d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9 namespace=k8s.io Jan 17 12:22:34.970776 containerd[1468]: time="2025-01-17T12:22:34.969278986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:34.979241 containerd[1468]: time="2025-01-17T12:22:34.978715053Z" level=info msg="shim disconnected" id=cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622 namespace=k8s.io Jan 17 12:22:34.979241 containerd[1468]: time="2025-01-17T12:22:34.978786524Z" level=warning msg="cleaning up after shim disconnected" id=cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622 namespace=k8s.io Jan 17 12:22:34.979241 containerd[1468]: time="2025-01-17T12:22:34.978801023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:34.981065 containerd[1468]: time="2025-01-17T12:22:34.980829883Z" level=info msg="shim disconnected" id=c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165 namespace=k8s.io Jan 17 12:22:34.981065 containerd[1468]: time="2025-01-17T12:22:34.980894950Z" level=warning msg="cleaning up after shim disconnected" id=c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165 namespace=k8s.io Jan 17 12:22:34.981065 containerd[1468]: time="2025-01-17T12:22:34.980911995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:35.008670 containerd[1468]: time="2025-01-17T12:22:35.008593466Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:22:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:22:35.032061 containerd[1468]: time="2025-01-17T12:22:35.031290816Z" level=info msg="TearDown network for sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" successfully" Jan 17 12:22:35.034879 containerd[1468]: time="2025-01-17T12:22:35.033687128Z" level=info msg="StopPodSandbox for \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" returns successfully" Jan 17 12:22:35.073403 containerd[1468]: time="2025-01-17T12:22:35.073218011Z" level=info msg="TearDown network for sandbox \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" successfully" Jan 17 12:22:35.073403 containerd[1468]: time="2025-01-17T12:22:35.073293516Z" level=info msg="StopPodSandbox for \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" returns successfully" Jan 17 12:22:35.102943 kubelet[2610]: I0117 12:22:35.100401 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-bin-dir\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.102943 kubelet[2610]: I0117 12:22:35.100454 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-run-calico\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.102943 kubelet[2610]: I0117 12:22:35.100497 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e894b53-0a8f-4242-acbe-0970b488515e-node-certs\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.102943 kubelet[2610]: I0117 12:22:35.100530 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxbz5\" (UniqueName: \"kubernetes.io/projected/3e894b53-0a8f-4242-acbe-0970b488515e-kube-api-access-kxbz5\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.102943 kubelet[2610]: I0117 12:22:35.100568 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-policysync\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.102943 kubelet[2610]: I0117 12:22:35.100594 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-flexvol-driver-host\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.103852 kubelet[2610]: I0117 12:22:35.100620 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-net-dir\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.103852 kubelet[2610]: I0117 12:22:35.100647 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-log-dir\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.103852 kubelet[2610]: I0117 12:22:35.100676 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-lib-calico\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.103852 kubelet[2610]: I0117 12:22:35.100707 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e894b53-0a8f-4242-acbe-0970b488515e-tigera-ca-bundle\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.103852 kubelet[2610]: I0117 12:22:35.100736 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-lib-modules\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.103852 kubelet[2610]: I0117 12:22:35.100761 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-xtables-lock\") pod \"3e894b53-0a8f-4242-acbe-0970b488515e\" (UID: \"3e894b53-0a8f-4242-acbe-0970b488515e\") " Jan 17 12:22:35.106519 kubelet[2610]: I0117 12:22:35.100871 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.106519 kubelet[2610]: I0117 12:22:35.100930 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.106519 kubelet[2610]: I0117 12:22:35.100959 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.104724 ntpd[1430]: Listen normally on 7 vxlan.calico 192.168.80.128:123 Jan 17 12:22:35.111039 kubelet[2610]: I0117 12:22:35.108705 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.111039 kubelet[2610]: I0117 12:22:35.108760 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-policysync" (OuterVolumeSpecName: "policysync") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "policysync".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.111039 kubelet[2610]: I0117 12:22:35.108787 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.111039 kubelet[2610]: I0117 12:22:35.108817 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.104825 ntpd[1430]: Listen normally on 8 vxlan.calico [fe80::6447:71ff:fec8:2e29%4]:123 Jan 17 12:22:35.104907 ntpd[1430]: Listen normally on 9 cali667f700302b [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:22:35.104966 ntpd[1430]: Listen normally on 10 calie177f1e9a5f [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:22:35.105023 ntpd[1430]: Listen normally on 11 cali57eb19459d5 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:22:35.105077 ntpd[1430]: Listen normally on 12 cali24b7f2a76c3 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:22:35.105131 ntpd[1430]: Listen normally on 13 caliaa96cb793f3 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:22:35.106201 ntpd[1430]: Listen normally on 14 cali3af6971d627 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:22:35.114870 kubelet[2610]: I0117 12:22:35.114485 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.114870 kubelet[2610]: I0117 12:22:35.114630 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:22:35.128818 kubelet[2610]: I0117 12:22:35.128716 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e894b53-0a8f-4242-acbe-0970b488515e-node-certs" (OuterVolumeSpecName: "node-certs") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:22:35.129185 kubelet[2610]: I0117 12:22:35.129095 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e894b53-0a8f-4242-acbe-0970b488515e-kube-api-access-kxbz5" (OuterVolumeSpecName: "kube-api-access-kxbz5") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "kube-api-access-kxbz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:22:35.140660 kubelet[2610]: I0117 12:22:35.140395 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e894b53-0a8f-4242-acbe-0970b488515e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3e894b53-0a8f-4242-acbe-0970b488515e" (UID: "3e894b53-0a8f-4242-acbe-0970b488515e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:22:35.177058 kubelet[2610]: I0117 12:22:35.174068 2610 topology_manager.go:215] "Topology Admit Handler" podUID="e126b548-4bb4-4411-ac57-4228ab1056b8" podNamespace="calico-system" podName="calico-node-sjk7p" Jan 17 12:22:35.177058 kubelet[2610]: E0117 12:22:35.174164 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e894b53-0a8f-4242-acbe-0970b488515e" containerName="flexvol-driver" Jan 17 12:22:35.177058 kubelet[2610]: E0117 12:22:35.174180 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e894b53-0a8f-4242-acbe-0970b488515e" containerName="install-cni" Jan 17 12:22:35.177058 kubelet[2610]: E0117 12:22:35.174191 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e894b53-0a8f-4242-acbe-0970b488515e" containerName="calico-node" Jan 17 12:22:35.177058 kubelet[2610]: E0117 12:22:35.174203 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04762b83-9859-4ba7-8017-1bd889867ddf" containerName="calico-typha" Jan 17 12:22:35.177058 kubelet[2610]: I0117 12:22:35.174274 2610 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e894b53-0a8f-4242-acbe-0970b488515e" containerName="calico-node" Jan 17 12:22:35.177058 kubelet[2610]: I0117 12:22:35.174287 2610 memory_manager.go:354] "RemoveStaleState removing state" podUID="04762b83-9859-4ba7-8017-1bd889867ddf" containerName="calico-typha" Jan 17 12:22:35.192745 systemd[1]: Created slice kubepods-besteffort-pode126b548_4bb4_4411_ac57_4228ab1056b8.slice - libcontainer container kubepods-besteffort-pode126b548_4bb4_4411_ac57_4228ab1056b8.slice. 
Jan 17 12:22:35.202062 kubelet[2610]: I0117 12:22:35.202021 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdmbg\" (UniqueName: \"kubernetes.io/projected/04762b83-9859-4ba7-8017-1bd889867ddf-kube-api-access-jdmbg\") pod \"04762b83-9859-4ba7-8017-1bd889867ddf\" (UID: \"04762b83-9859-4ba7-8017-1bd889867ddf\") " Jan 17 12:22:35.202838 kubelet[2610]: I0117 12:22:35.202578 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/04762b83-9859-4ba7-8017-1bd889867ddf-typha-certs\") pod \"04762b83-9859-4ba7-8017-1bd889867ddf\" (UID: \"04762b83-9859-4ba7-8017-1bd889867ddf\") " Jan 17 12:22:35.213289 kubelet[2610]: I0117 12:22:35.211088 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04762b83-9859-4ba7-8017-1bd889867ddf-tigera-ca-bundle\") pod \"04762b83-9859-4ba7-8017-1bd889867ddf\" (UID: \"04762b83-9859-4ba7-8017-1bd889867ddf\") " Jan 17 12:22:35.213289 kubelet[2610]: I0117 12:22:35.211195 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-cni-log-dir\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213289 kubelet[2610]: I0117 12:22:35.211231 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-var-lib-calico\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213289 kubelet[2610]: I0117 12:22:35.211287 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-xtables-lock\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213289 kubelet[2610]: I0117 12:22:35.211321 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qd77\" (UniqueName: \"kubernetes.io/projected/e126b548-4bb4-4411-ac57-4228ab1056b8-kube-api-access-9qd77\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213654 kubelet[2610]: I0117 12:22:35.211352 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-cni-bin-dir\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213654 kubelet[2610]: I0117 12:22:35.211386 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e126b548-4bb4-4411-ac57-4228ab1056b8-tigera-ca-bundle\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213654 kubelet[2610]: I0117 12:22:35.211416 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-policysync\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213654 kubelet[2610]: I0117 12:22:35.211449 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-var-run-calico\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213654 kubelet[2610]: I0117 12:22:35.211477 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-cni-net-dir\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213905 kubelet[2610]: I0117 12:22:35.211513 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e126b548-4bb4-4411-ac57-4228ab1056b8-node-certs\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213905 kubelet[2610]: I0117 12:22:35.211541 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-flexvol-driver-host\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213905 kubelet[2610]: I0117 12:22:35.211571 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e126b548-4bb4-4411-ac57-4228ab1056b8-lib-modules\") pod \"calico-node-sjk7p\" (UID: \"e126b548-4bb4-4411-ac57-4228ab1056b8\") " pod="calico-system/calico-node-sjk7p" Jan 17 12:22:35.213905 kubelet[2610]: I0117 12:22:35.211610 2610 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-flexvol-driver-host\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.213905 kubelet[2610]: I0117 12:22:35.211629 2610 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-net-dir\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.213905 kubelet[2610]: I0117 12:22:35.211648 2610 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-log-dir\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214210 kubelet[2610]: I0117 12:22:35.211666 2610 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-lib-calico\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214210 kubelet[2610]: I0117 12:22:35.211684 2610 reconciler_common.go:289] "Volume detached for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e894b53-0a8f-4242-acbe-0970b488515e-tigera-ca-bundle\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214210 kubelet[2610]: I0117 12:22:35.211701 2610 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-lib-modules\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214210 kubelet[2610]: I0117 12:22:35.211718 2610 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-xtables-lock\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214210 kubelet[2610]: I0117 12:22:35.211735 2610 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-cni-bin-dir\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214210 kubelet[2610]: I0117 12:22:35.211751 2610 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e894b53-0a8f-4242-acbe-0970b488515e-node-certs\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214210 kubelet[2610]: I0117 12:22:35.211769 2610 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kxbz5\" (UniqueName: \"kubernetes.io/projected/3e894b53-0a8f-4242-acbe-0970b488515e-kube-api-access-kxbz5\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214820 kubelet[2610]: I0117 12:22:35.211787 2610 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-var-run-calico\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.214820 kubelet[2610]: I0117 12:22:35.211803 2610 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e894b53-0a8f-4242-acbe-0970b488515e-policysync\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.233286 kubelet[2610]: I0117 12:22:35.230745 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04762b83-9859-4ba7-8017-1bd889867ddf-kube-api-access-jdmbg" (OuterVolumeSpecName: "kube-api-access-jdmbg") pod "04762b83-9859-4ba7-8017-1bd889867ddf" (UID: "04762b83-9859-4ba7-8017-1bd889867ddf"). InnerVolumeSpecName "kube-api-access-jdmbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:22:35.233562 kubelet[2610]: I0117 12:22:35.232351 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04762b83-9859-4ba7-8017-1bd889867ddf-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "04762b83-9859-4ba7-8017-1bd889867ddf" (UID: "04762b83-9859-4ba7-8017-1bd889867ddf"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:22:35.240305 kubelet[2610]: I0117 12:22:35.237846 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04762b83-9859-4ba7-8017-1bd889867ddf-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "04762b83-9859-4ba7-8017-1bd889867ddf" (UID: "04762b83-9859-4ba7-8017-1bd889867ddf"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:22:35.317000 kubelet[2610]: I0117 12:22:35.316874 2610 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jdmbg\" (UniqueName: \"kubernetes.io/projected/04762b83-9859-4ba7-8017-1bd889867ddf-kube-api-access-jdmbg\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.319330 kubelet[2610]: I0117 12:22:35.317346 2610 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/04762b83-9859-4ba7-8017-1bd889867ddf-typha-certs\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.319619 kubelet[2610]: I0117 12:22:35.319591 2610 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04762b83-9859-4ba7-8017-1bd889867ddf-tigera-ca-bundle\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:35.391934 systemd[1]: Removed slice kubepods-besteffort-pod04762b83_9859_4ba7_8017_1bd889867ddf.slice - libcontainer container kubepods-besteffort-pod04762b83_9859_4ba7_8017_1bd889867ddf.slice. Jan 17 12:22:35.400805 systemd[1]: Removed slice kubepods-besteffort-pod3e894b53_0a8f_4242_acbe_0970b488515e.slice - libcontainer container kubepods-besteffort-pod3e894b53_0a8f_4242_acbe_0970b488515e.slice. Jan 17 12:22:35.400967 systemd[1]: kubepods-besteffort-pod3e894b53_0a8f_4242_acbe_0970b488515e.slice: Consumed 2.885s CPU time. Jan 17 12:22:35.421420 systemd-networkd[1378]: cali667f700302b: Link DOWN Jan 17 12:22:35.421433 systemd-networkd[1378]: cali667f700302b: Lost carrier Jan 17 12:22:35.514437 containerd[1468]: time="2025-01-17T12:22:35.514176892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sjk7p,Uid:e126b548-4bb4-4411-ac57-4228ab1056b8,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:35.527875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622-rootfs.mount: Deactivated successfully. Jan 17 12:22:35.530447 systemd[1]: var-lib-kubelet-pods-3e894b53\x2d0a8f\x2d4242\x2dacbe\x2d0970b488515e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 17 12:22:35.530577 systemd[1]: var-lib-kubelet-pods-04762b83\x2d9859\x2d4ba7\x2d8017\x2d1bd889867ddf-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 17 12:22:35.530691 systemd[1]: var-lib-kubelet-pods-3e894b53\x2d0a8f\x2d4242\x2dacbe\x2d0970b488515e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxbz5.mount: Deactivated successfully. Jan 17 12:22:35.530802 systemd[1]: var-lib-kubelet-pods-3e894b53\x2d0a8f\x2d4242\x2dacbe\x2d0970b488515e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jan 17 12:22:35.530905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165-rootfs.mount: Deactivated successfully. Jan 17 12:22:35.531001 systemd[1]: var-lib-kubelet-pods-04762b83\x2d9859\x2d4ba7\x2d8017\x2d1bd889867ddf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djdmbg.mount: Deactivated successfully. Jan 17 12:22:35.531096 systemd[1]: var-lib-kubelet-pods-04762b83\x2d9859\x2d4ba7\x2d8017\x2d1bd889867ddf-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 17 12:22:35.638456 containerd[1468]: time="2025-01-17T12:22:35.635631091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:35.638456 containerd[1468]: time="2025-01-17T12:22:35.635700572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:35.638456 containerd[1468]: time="2025-01-17T12:22:35.635718567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:35.638456 containerd[1468]: time="2025-01-17T12:22:35.635862713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.417 [INFO][5043] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.417 [INFO][5043] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" iface="eth0" netns="/var/run/netns/cni-88e556d4-3f6b-0005-0965-316ab0b6f2ce" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.417 [INFO][5043] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" iface="eth0" netns="/var/run/netns/cni-88e556d4-3f6b-0005-0965-316ab0b6f2ce" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.431 [INFO][5043] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" after=13.817438ms iface="eth0" netns="/var/run/netns/cni-88e556d4-3f6b-0005-0965-316ab0b6f2ce" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.431 [INFO][5043] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.431 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.485 [INFO][5057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.487 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.487 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.621 [INFO][5057] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.621 [INFO][5057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.633 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:35.652523 containerd[1468]: 2025-01-17 12:22:35.645 [INFO][5043] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:35.657230 containerd[1468]: time="2025-01-17T12:22:35.653464692Z" level=info msg="TearDown network for sandbox \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" successfully" Jan 17 12:22:35.657230 containerd[1468]: time="2025-01-17T12:22:35.653514768Z" level=info msg="StopPodSandbox for \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" returns successfully" Jan 17 12:22:35.664286 containerd[1468]: time="2025-01-17T12:22:35.663314090Z" level=info msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\"" Jan 17 12:22:35.674129 systemd[1]: run-netns-cni\x2d88e556d4\x2d3f6b\x2d0005\x2d0965\x2d316ab0b6f2ce.mount: Deactivated successfully. Jan 17 12:22:35.746517 systemd[1]: Started cri-containerd-8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3.scope - libcontainer container 8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3. 
Jan 17 12:22:35.800349 containerd[1468]: time="2025-01-17T12:22:35.800277561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:35.805561 kubelet[2610]: I0117 12:22:35.804707 2610 scope.go:117] "RemoveContainer" containerID="672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c" Jan 17 12:22:35.810557 containerd[1468]: time="2025-01-17T12:22:35.810495630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:22:35.817451 containerd[1468]: time="2025-01-17T12:22:35.814216743Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:35.818467 containerd[1468]: time="2025-01-17T12:22:35.818370790Z" level=info msg="RemoveContainer for \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\"" Jan 17 12:22:35.840260 containerd[1468]: time="2025-01-17T12:22:35.840165255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:35.843540 containerd[1468]: time="2025-01-17T12:22:35.842931801Z" level=info msg="RemoveContainer for \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\" returns successfully" Jan 17 12:22:35.845153 kubelet[2610]: I0117 12:22:35.845120 2610 scope.go:117] "RemoveContainer" containerID="672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c" Jan 17 12:22:35.846127 containerd[1468]: time="2025-01-17T12:22:35.846077219Z" level=error msg="ContainerStatus for \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\": not found" Jan 17 12:22:35.847279 kubelet[2610]: E0117 12:22:35.847220 2610 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\": not found" containerID="672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c" Jan 17 12:22:35.847518 kubelet[2610]: I0117 12:22:35.847486 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c"} err="failed to get container status \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"672366ba8cba07e9db3f9dddf4a91b774ed132344eb44424e68445a7939b2c9c\": not found" Jan 17 12:22:35.847674 kubelet[2610]: I0117 12:22:35.847656 2610 scope.go:117] "RemoveContainer" containerID="ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3" Jan 17 12:22:35.849762 containerd[1468]: time="2025-01-17T12:22:35.849476928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" 
in 4.589556803s" Jan 17 12:22:35.851749 containerd[1468]: time="2025-01-17T12:22:35.851703249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:22:35.853026 kubelet[2610]: I0117 12:22:35.852973 2610 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:35.855017 containerd[1468]: time="2025-01-17T12:22:35.854439705Z" level=info msg="RemoveContainer for \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\"" Jan 17 12:22:35.857529 containerd[1468]: time="2025-01-17T12:22:35.857484672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:22:35.865360 containerd[1468]: time="2025-01-17T12:22:35.865307331Z" level=info msg="CreateContainer within sandbox \"6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:22:35.871911 containerd[1468]: time="2025-01-17T12:22:35.871853456Z" level=info msg="RemoveContainer for \"ca7d21af515bd9ed9aa48443193c9ee328e85a112d8f121f6d6a631aad0127d3\" returns successfully" Jan 17 12:22:35.872506 kubelet[2610]: I0117 12:22:35.872374 2610 scope.go:117] "RemoveContainer" containerID="83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2" Jan 17 12:22:35.874607 containerd[1468]: time="2025-01-17T12:22:35.874571183Z" level=info msg="RemoveContainer for \"83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2\"" Jan 17 12:22:35.877885 containerd[1468]: time="2025-01-17T12:22:35.877796693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sjk7p,Uid:e126b548-4bb4-4411-ac57-4228ab1056b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3\"" Jan 17 12:22:35.880171 containerd[1468]: time="2025-01-17T12:22:35.879733215Z" level=info msg="RemoveContainer for \"83b9107a76c5b913e1517ea8c18a6ea326782286bc882d47eb269a40f868a8c2\" returns successfully" Jan 17 12:22:35.880740 kubelet[2610]: I0117 12:22:35.880560 2610 scope.go:117] "RemoveContainer" containerID="b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e" Jan 17 12:22:35.886614 containerd[1468]: time="2025-01-17T12:22:35.885882342Z" level=info msg="RemoveContainer for \"b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e\"" Jan 17 12:22:35.894623 containerd[1468]: time="2025-01-17T12:22:35.894415158Z" level=info msg="CreateContainer within sandbox \"8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:22:35.916657 containerd[1468]: time="2025-01-17T12:22:35.916606983Z" level=info msg="RemoveContainer for \"b1ea437d3988c0fc1de3c6f592b6b179db3c27e04115404ee910712f0b62a78e\" returns successfully" Jan 17 12:22:35.933193 containerd[1468]: time="2025-01-17T12:22:35.933127143Z" level=info msg="CreateContainer within sandbox \"6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c7f9a9476e068ea928a1cd73985b509e3f94c487eb14d494cce7a5a8ab0ec32\"" Jan 17 12:22:35.938802 containerd[1468]: time="2025-01-17T12:22:35.938597182Z" level=info msg="StartContainer for \"4c7f9a9476e068ea928a1cd73985b509e3f94c487eb14d494cce7a5a8ab0ec32\"" Jan 17 
12:22:35.988561 containerd[1468]: time="2025-01-17T12:22:35.986930560Z" level=info msg="CreateContainer within sandbox \"8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4\"" Jan 17 12:22:35.997835 containerd[1468]: time="2025-01-17T12:22:35.997725421Z" level=info msg="StartContainer for \"e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4\"" Jan 17 12:22:36.032510 systemd[1]: Started cri-containerd-4c7f9a9476e068ea928a1cd73985b509e3f94c487eb14d494cce7a5a8ab0ec32.scope - libcontainer container 4c7f9a9476e068ea928a1cd73985b509e3f94c487eb14d494cce7a5a8ab0ec32. Jan 17 12:22:36.091765 systemd[1]: Started cri-containerd-e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4.scope - libcontainer container e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4. Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:35.944 [WARNING][5101] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0", GenerateName:"calico-kube-controllers-7d68f7877f-", Namespace:"calico-system", SelfLink:"", UID:"c715a196-a54b-4348-9ab5-065abb9617bb", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d68f7877f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622", Pod:"calico-kube-controllers-7d68f7877f-n2vhz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali667f700302b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:35.945 [INFO][5101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:35.945 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" iface="eth0" netns="" Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:35.945 [INFO][5101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:35.945 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:36.077 [INFO][5128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:36.077 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:36.077 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:36.094 [WARNING][5128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:36.094 [INFO][5128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:36.099 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:36.110746 containerd[1468]: 2025-01-17 12:22:36.102 [INFO][5101] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:36.113206 containerd[1468]: time="2025-01-17T12:22:36.111587691Z" level=info msg="TearDown network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" successfully" Jan 17 12:22:36.113206 containerd[1468]: time="2025-01-17T12:22:36.112797979Z" level=info msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" returns successfully" Jan 17 12:22:36.186951 containerd[1468]: time="2025-01-17T12:22:36.186809341Z" level=info msg="StartContainer for \"e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4\" returns successfully" Jan 17 12:22:36.224302 kubelet[2610]: I0117 12:22:36.224220 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z78s4\" (UniqueName: \"kubernetes.io/projected/c715a196-a54b-4348-9ab5-065abb9617bb-kube-api-access-z78s4\") pod \"c715a196-a54b-4348-9ab5-065abb9617bb\" (UID: \"c715a196-a54b-4348-9ab5-065abb9617bb\") " Jan 17 12:22:36.227529 kubelet[2610]: I0117 12:22:36.226671 2610 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c715a196-a54b-4348-9ab5-065abb9617bb-tigera-ca-bundle\") pod \"c715a196-a54b-4348-9ab5-065abb9617bb\" (UID: \"c715a196-a54b-4348-9ab5-065abb9617bb\") " Jan 17 12:22:36.245978 kubelet[2610]: I0117 12:22:36.245617 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c715a196-a54b-4348-9ab5-065abb9617bb-kube-api-access-z78s4" (OuterVolumeSpecName: "kube-api-access-z78s4") pod "c715a196-a54b-4348-9ab5-065abb9617bb" (UID: "c715a196-a54b-4348-9ab5-065abb9617bb"). InnerVolumeSpecName "kube-api-access-z78s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:22:36.247289 kubelet[2610]: I0117 12:22:36.246456 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c715a196-a54b-4348-9ab5-065abb9617bb-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c715a196-a54b-4348-9ab5-065abb9617bb" (UID: "c715a196-a54b-4348-9ab5-065abb9617bb"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:22:36.247441 containerd[1468]: time="2025-01-17T12:22:36.246187073Z" level=info msg="StartContainer for \"4c7f9a9476e068ea928a1cd73985b509e3f94c487eb14d494cce7a5a8ab0ec32\" returns successfully" Jan 17 12:22:36.253014 systemd[1]: cri-containerd-e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4.scope: Deactivated successfully. 
Jan 17 12:22:36.329176 kubelet[2610]: I0117 12:22:36.328872 2610 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c715a196-a54b-4348-9ab5-065abb9617bb-tigera-ca-bundle\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:36.329176 kubelet[2610]: I0117 12:22:36.328943 2610 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z78s4\" (UniqueName: \"kubernetes.io/projected/c715a196-a54b-4348-9ab5-065abb9617bb-kube-api-access-z78s4\") on node \"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal\" DevicePath \"\"" Jan 17 12:22:36.357988 containerd[1468]: time="2025-01-17T12:22:36.357894546Z" level=info msg="shim disconnected" id=e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4 namespace=k8s.io Jan 17 12:22:36.357988 containerd[1468]: time="2025-01-17T12:22:36.357972573Z" level=warning msg="cleaning up after shim disconnected" id=e9a900807dc723eb9325940a1fef5d255bb9babbf9caed434e2d367221b99bf4 namespace=k8s.io Jan 17 12:22:36.357988 containerd[1468]: time="2025-01-17T12:22:36.357991619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:36.515787 systemd[1]: var-lib-kubelet-pods-c715a196\x2da54b\x2d4348\x2d9ab5\x2d065abb9617bb-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 17 12:22:36.515955 systemd[1]: var-lib-kubelet-pods-c715a196\x2da54b\x2d4348\x2d9ab5\x2d065abb9617bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz78s4.mount: Deactivated successfully. Jan 17 12:22:36.881941 containerd[1468]: time="2025-01-17T12:22:36.881213401Z" level=info msg="CreateContainer within sandbox \"8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:22:36.896461 kubelet[2610]: I0117 12:22:36.891960 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b5f56598d-sv9sp" podStartSLOduration=25.156744758 podStartE2EDuration="32.891932131s" podCreationTimestamp="2025-01-17 12:22:04 +0000 UTC" firstStartedPulling="2025-01-17 12:22:28.1189261 +0000 UTC m=+44.947024641" lastFinishedPulling="2025-01-17 12:22:35.854113468 +0000 UTC m=+52.682212014" observedRunningTime="2025-01-17 12:22:36.886216773 +0000 UTC m=+53.714315331" watchObservedRunningTime="2025-01-17 12:22:36.891932131 +0000 UTC m=+53.720030689" Jan 17 12:22:36.895223 systemd[1]: Removed slice kubepods-besteffort-podc715a196_a54b_4348_9ab5_065abb9617bb.slice - libcontainer container kubepods-besteffort-podc715a196_a54b_4348_9ab5_065abb9617bb.slice. Jan 17 12:22:36.950021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479369572.mount: Deactivated successfully. 
Jan 17 12:22:36.958123 containerd[1468]: time="2025-01-17T12:22:36.958063024Z" level=info msg="CreateContainer within sandbox \"8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b\"" Jan 17 12:22:36.959463 containerd[1468]: time="2025-01-17T12:22:36.958859131Z" level=info msg="StartContainer for \"fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b\"" Jan 17 12:22:37.094507 systemd[1]: Started cri-containerd-fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b.scope - libcontainer container fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b. Jan 17 12:22:37.181231 containerd[1468]: time="2025-01-17T12:22:37.180763517Z" level=info msg="StartContainer for \"fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b\" returns successfully" Jan 17 12:22:37.374712 kubelet[2610]: I0117 12:22:37.374662 2610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04762b83-9859-4ba7-8017-1bd889867ddf" path="/var/lib/kubelet/pods/04762b83-9859-4ba7-8017-1bd889867ddf/volumes" Jan 17 12:22:37.379161 kubelet[2610]: I0117 12:22:37.377318 2610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e894b53-0a8f-4242-acbe-0970b488515e" path="/var/lib/kubelet/pods/3e894b53-0a8f-4242-acbe-0970b488515e/volumes" Jan 17 12:22:37.379960 kubelet[2610]: I0117 12:22:37.379921 2610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c715a196-a54b-4348-9ab5-065abb9617bb" path="/var/lib/kubelet/pods/c715a196-a54b-4348-9ab5-065abb9617bb/volumes" Jan 17 12:22:37.475402 kubelet[2610]: I0117 12:22:37.475142 2610 topology_manager.go:215] "Topology Admit Handler" podUID="d296630b-0cd5-439e-b2f2-6eb680e08a86" podNamespace="calico-system" podName="calico-typha-845c4cc7fb-kjfqw" Jan 17 12:22:37.477130 kubelet[2610]: E0117 12:22:37.476940 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c715a196-a54b-4348-9ab5-065abb9617bb" containerName="calico-kube-controllers" Jan 17 12:22:37.477530 kubelet[2610]: I0117 12:22:37.477404 2610 memory_manager.go:354] "RemoveStaleState removing state" podUID="c715a196-a54b-4348-9ab5-065abb9617bb" containerName="calico-kube-controllers" Jan 17 12:22:37.499160 systemd[1]: Created slice kubepods-besteffort-podd296630b_0cd5_439e_b2f2_6eb680e08a86.slice - libcontainer container kubepods-besteffort-podd296630b_0cd5_439e_b2f2_6eb680e08a86.slice. Jan 17 12:22:37.515978 systemd[1]: run-containerd-runc-k8s.io-fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b-runc.IGe4TK.mount: Deactivated successfully. 
Jan 17 12:22:37.536317 kubelet[2610]: I0117 12:22:37.536267 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzv8\" (UniqueName: \"kubernetes.io/projected/d296630b-0cd5-439e-b2f2-6eb680e08a86-kube-api-access-8vzv8\") pod \"calico-typha-845c4cc7fb-kjfqw\" (UID: \"d296630b-0cd5-439e-b2f2-6eb680e08a86\") " pod="calico-system/calico-typha-845c4cc7fb-kjfqw" Jan 17 12:22:37.537156 kubelet[2610]: I0117 12:22:37.537071 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d296630b-0cd5-439e-b2f2-6eb680e08a86-typha-certs\") pod \"calico-typha-845c4cc7fb-kjfqw\" (UID: \"d296630b-0cd5-439e-b2f2-6eb680e08a86\") " pod="calico-system/calico-typha-845c4cc7fb-kjfqw" Jan 17 12:22:37.538074 kubelet[2610]: I0117 12:22:37.537983 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d296630b-0cd5-439e-b2f2-6eb680e08a86-tigera-ca-bundle\") pod \"calico-typha-845c4cc7fb-kjfqw\" (UID: \"d296630b-0cd5-439e-b2f2-6eb680e08a86\") " pod="calico-system/calico-typha-845c4cc7fb-kjfqw" Jan 17 12:22:37.602417 containerd[1468]: time="2025-01-17T12:22:37.602359125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:37.604007 containerd[1468]: time="2025-01-17T12:22:37.603817522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:22:37.606311 containerd[1468]: time="2025-01-17T12:22:37.605743939Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:37.611104 containerd[1468]: time="2025-01-17T12:22:37.611063608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:37.613553 containerd[1468]: time="2025-01-17T12:22:37.613509374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.755204261s" Jan 17 12:22:37.613743 containerd[1468]: time="2025-01-17T12:22:37.613712787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:22:37.616457 containerd[1468]: time="2025-01-17T12:22:37.616145727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:22:37.618264 containerd[1468]: time="2025-01-17T12:22:37.617955321Z" level=info msg="CreateContainer within sandbox \"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:22:37.641072 containerd[1468]: time="2025-01-17T12:22:37.640828674Z" level=info msg="CreateContainer within sandbox 
\"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7f824296a3cb6c91484cc2f6ee6c32365e3015508e3d496beef48050589e1db1\"" Jan 17 12:22:37.648301 containerd[1468]: time="2025-01-17T12:22:37.644569345Z" level=info msg="StartContainer for \"7f824296a3cb6c91484cc2f6ee6c32365e3015508e3d496beef48050589e1db1\"" Jan 17 12:22:37.781570 systemd[1]: Started cri-containerd-7f824296a3cb6c91484cc2f6ee6c32365e3015508e3d496beef48050589e1db1.scope - libcontainer container 7f824296a3cb6c91484cc2f6ee6c32365e3015508e3d496beef48050589e1db1. Jan 17 12:22:37.814180 containerd[1468]: time="2025-01-17T12:22:37.813425846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-845c4cc7fb-kjfqw,Uid:d296630b-0cd5-439e-b2f2-6eb680e08a86,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:37.899874 kubelet[2610]: I0117 12:22:37.899653 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:37.920086 containerd[1468]: time="2025-01-17T12:22:37.918828696Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:37.923355 containerd[1468]: time="2025-01-17T12:22:37.923287143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:22:37.927832 containerd[1468]: time="2025-01-17T12:22:37.927711360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:37.927996 containerd[1468]: time="2025-01-17T12:22:37.927808156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:37.927996 containerd[1468]: time="2025-01-17T12:22:37.927833338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:37.927996 containerd[1468]: time="2025-01-17T12:22:37.927967250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:37.938482 containerd[1468]: time="2025-01-17T12:22:37.938429488Z" level=info msg="StartContainer for \"7f824296a3cb6c91484cc2f6ee6c32365e3015508e3d496beef48050589e1db1\" returns successfully" Jan 17 12:22:37.940707 containerd[1468]: time="2025-01-17T12:22:37.940652506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 324.466925ms" Jan 17 12:22:37.940851 containerd[1468]: time="2025-01-17T12:22:37.940711757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:22:37.947727 containerd[1468]: time="2025-01-17T12:22:37.947677746Z" level=info msg="CreateContainer within sandbox \"a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:22:37.982141 systemd[1]: Started cri-containerd-f799416127e5dc111fcc52a66365888ed80f234b52453d0e25500c8b4b500d52.scope - libcontainer container f799416127e5dc111fcc52a66365888ed80f234b52453d0e25500c8b4b500d52. Jan 17 12:22:37.985349 containerd[1468]: time="2025-01-17T12:22:37.984959310Z" level=info msg="CreateContainer within sandbox \"a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e8c2426248d037d6fcff4d2a9925b9e1e64a50916243f73a46e18d8901acd45\"" Jan 17 12:22:37.987657 containerd[1468]: time="2025-01-17T12:22:37.986560068Z" level=info msg="StartContainer for \"8e8c2426248d037d6fcff4d2a9925b9e1e64a50916243f73a46e18d8901acd45\"" Jan 17 12:22:38.062499 systemd[1]: Started cri-containerd-8e8c2426248d037d6fcff4d2a9925b9e1e64a50916243f73a46e18d8901acd45.scope - libcontainer container 8e8c2426248d037d6fcff4d2a9925b9e1e64a50916243f73a46e18d8901acd45. 
Jan 17 12:22:38.102706 ntpd[1430]: Deleting interface #9 cali667f700302b, fe80::ecee:eeff:feee:eeee%7#123, interface stats: received=0, sent=0, dropped=0, active_time=3 secs Jan 17 12:22:38.103377 ntpd[1430]: 17 Jan 12:22:38 ntpd[1430]: Deleting interface #9 cali667f700302b, fe80::ecee:eeff:feee:eeee%7#123, interface stats: received=0, sent=0, dropped=0, active_time=3 secs Jan 17 12:22:38.205206 containerd[1468]: time="2025-01-17T12:22:38.204225431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-845c4cc7fb-kjfqw,Uid:d296630b-0cd5-439e-b2f2-6eb680e08a86,Namespace:calico-system,Attempt:0,} returns sandbox id \"f799416127e5dc111fcc52a66365888ed80f234b52453d0e25500c8b4b500d52\"" Jan 17 12:22:38.226559 containerd[1468]: time="2025-01-17T12:22:38.226329439Z" level=info msg="CreateContainer within sandbox \"f799416127e5dc111fcc52a66365888ed80f234b52453d0e25500c8b4b500d52\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:22:38.248686 containerd[1468]: time="2025-01-17T12:22:38.248609984Z" level=info msg="CreateContainer within sandbox \"f799416127e5dc111fcc52a66365888ed80f234b52453d0e25500c8b4b500d52\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5740c0232cac445c67f2d53460c92038a44a33c860e638ce9ef73add34cf601e\"" Jan 17 12:22:38.250289 containerd[1468]: time="2025-01-17T12:22:38.249835631Z" level=info msg="StartContainer for \"5740c0232cac445c67f2d53460c92038a44a33c860e638ce9ef73add34cf601e\"" Jan 17 12:22:38.312123 systemd[1]: Started cri-containerd-5740c0232cac445c67f2d53460c92038a44a33c860e638ce9ef73add34cf601e.scope - libcontainer container 5740c0232cac445c67f2d53460c92038a44a33c860e638ce9ef73add34cf601e. Jan 17 12:22:38.326315 containerd[1468]: time="2025-01-17T12:22:38.325436063Z" level=info msg="StartContainer for \"8e8c2426248d037d6fcff4d2a9925b9e1e64a50916243f73a46e18d8901acd45\" returns successfully" Jan 17 12:22:38.560830 kubelet[2610]: I0117 12:22:38.560551 2610 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:22:38.560830 kubelet[2610]: I0117 12:22:38.560624 2610 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:22:38.610560 containerd[1468]: time="2025-01-17T12:22:38.608595297Z" level=info msg="StartContainer for \"5740c0232cac445c67f2d53460c92038a44a33c860e638ce9ef73add34cf601e\" returns successfully" Jan 17 12:22:38.961549 kubelet[2610]: I0117 12:22:38.961357 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gjb8c" podStartSLOduration=25.448608505 podStartE2EDuration="34.961329597s" podCreationTimestamp="2025-01-17 12:22:04 +0000 UTC" firstStartedPulling="2025-01-17 12:22:28.102298024 +0000 UTC m=+44.930396570" lastFinishedPulling="2025-01-17 12:22:37.615019121 +0000 UTC m=+54.443117662" observedRunningTime="2025-01-17 12:22:38.93801262 +0000 UTC m=+55.766111197" watchObservedRunningTime="2025-01-17 12:22:38.961329597 +0000 UTC m=+55.789428155" Jan 17 12:22:39.002202 kubelet[2610]: I0117 12:22:39.002117 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b5f56598d-kfl2c" podStartSLOduration=29.127248636 podStartE2EDuration="35.002090468s" podCreationTimestamp="2025-01-17 12:22:04 +0000 UTC" firstStartedPulling="2025-01-17 12:22:32.067032642 +0000 UTC 
m=+48.895131180" lastFinishedPulling="2025-01-17 12:22:37.941874469 +0000 UTC m=+54.769973012" observedRunningTime="2025-01-17 12:22:38.962338099 +0000 UTC m=+55.790436654" watchObservedRunningTime="2025-01-17 12:22:39.002090468 +0000 UTC m=+55.830189024" Jan 17 12:22:39.004272 kubelet[2610]: I0117 12:22:39.002846 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-845c4cc7fb-kjfqw" podStartSLOduration=9.002826105 podStartE2EDuration="9.002826105s" podCreationTimestamp="2025-01-17 12:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:38.999103357 +0000 UTC m=+55.827201915" watchObservedRunningTime="2025-01-17 12:22:39.002826105 +0000 UTC m=+55.830924661" Jan 17 12:22:39.923872 kubelet[2610]: I0117 12:22:39.923823 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:40.094543 systemd[1]: cri-containerd-fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b.scope: Deactivated successfully. Jan 17 12:22:40.141767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b-rootfs.mount: Deactivated successfully. Jan 17 12:22:40.202628 containerd[1468]: time="2025-01-17T12:22:40.202455680Z" level=info msg="shim disconnected" id=fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b namespace=k8s.io Jan 17 12:22:40.202628 containerd[1468]: time="2025-01-17T12:22:40.202529367Z" level=warning msg="cleaning up after shim disconnected" id=fd0ca61eec27e6f6d62f9a4138db9d530d956e0db41a247f297978977ac1ef0b namespace=k8s.io Jan 17 12:22:40.202628 containerd[1468]: time="2025-01-17T12:22:40.202541457Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:40.238940 containerd[1468]: time="2025-01-17T12:22:40.238842684Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:22:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:22:40.954414 containerd[1468]: time="2025-01-17T12:22:40.952800482Z" level=info msg="CreateContainer within sandbox \"8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:22:40.988587 containerd[1468]: time="2025-01-17T12:22:40.988524779Z" level=info msg="CreateContainer within sandbox \"8a92c8cf5d89a3b86f2e146ea3bc2879e6d97c2c7ed24b3c6fc9b03f77191bd3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c8fa11eb25f929badde0c75c1767c16ac1e23186daebdfb007274f5c0318dcc\"" Jan 17 12:22:40.990618 containerd[1468]: time="2025-01-17T12:22:40.990573075Z" level=info msg="StartContainer for \"9c8fa11eb25f929badde0c75c1767c16ac1e23186daebdfb007274f5c0318dcc\"" Jan 17 12:22:41.061490 systemd[1]: Started cri-containerd-9c8fa11eb25f929badde0c75c1767c16ac1e23186daebdfb007274f5c0318dcc.scope - libcontainer container 9c8fa11eb25f929badde0c75c1767c16ac1e23186daebdfb007274f5c0318dcc. 
Jan 17 12:22:41.142402 containerd[1468]: time="2025-01-17T12:22:41.142310101Z" level=info msg="StartContainer for \"9c8fa11eb25f929badde0c75c1767c16ac1e23186daebdfb007274f5c0318dcc\" returns successfully" Jan 17 12:22:41.294222 kubelet[2610]: I0117 12:22:41.292848 2610 topology_manager.go:215] "Topology Admit Handler" podUID="a94d8998-2d47-46dc-8a77-8c8eb86007b6" podNamespace="calico-system" podName="calico-kube-controllers-796fcf66c4-smftv" Jan 17 12:22:41.308648 systemd[1]: Created slice kubepods-besteffort-poda94d8998_2d47_46dc_8a77_8c8eb86007b6.slice - libcontainer container kubepods-besteffort-poda94d8998_2d47_46dc_8a77_8c8eb86007b6.slice. Jan 17 12:22:41.374747 kubelet[2610]: I0117 12:22:41.371516 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a94d8998-2d47-46dc-8a77-8c8eb86007b6-tigera-ca-bundle\") pod \"calico-kube-controllers-796fcf66c4-smftv\" (UID: \"a94d8998-2d47-46dc-8a77-8c8eb86007b6\") " pod="calico-system/calico-kube-controllers-796fcf66c4-smftv" Jan 17 12:22:41.374747 kubelet[2610]: I0117 12:22:41.371572 2610 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjccw\" (UniqueName: \"kubernetes.io/projected/a94d8998-2d47-46dc-8a77-8c8eb86007b6-kube-api-access-qjccw\") pod \"calico-kube-controllers-796fcf66c4-smftv\" (UID: \"a94d8998-2d47-46dc-8a77-8c8eb86007b6\") " pod="calico-system/calico-kube-controllers-796fcf66c4-smftv" Jan 17 12:22:41.617186 containerd[1468]: time="2025-01-17T12:22:41.617110939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796fcf66c4-smftv,Uid:a94d8998-2d47-46dc-8a77-8c8eb86007b6,Namespace:calico-system,Attempt:0,}" Jan 17 12:22:41.777566 systemd-networkd[1378]: cali772c77b22a8: Link UP Jan 17 12:22:41.777934 systemd-networkd[1378]: cali772c77b22a8: Gained carrier Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.694 [INFO][5536] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0 calico-kube-controllers-796fcf66c4- calico-system a94d8998-2d47-46dc-8a77-8c8eb86007b6 1057 0 2025-01-17 12:22:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:796fcf66c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal calico-kube-controllers-796fcf66c4-smftv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali772c77b22a8 [] []}} ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.694 [INFO][5536] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" 
WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.727 [INFO][5546] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" HandleID="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.739 [INFO][5546] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" HandleID="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", "pod":"calico-kube-controllers-796fcf66c4-smftv", "timestamp":"2025-01-17 12:22:41.727925551 +0000 UTC"}, Hostname:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.739 [INFO][5546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.739 [INFO][5546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.739 [INFO][5546] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal' Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.741 [INFO][5546] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.745 [INFO][5546] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.750 [INFO][5546] ipam/ipam.go 489: Trying affinity for 192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.752 [INFO][5546] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.754 [INFO][5546] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.128/26 host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.754 [INFO][5546] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.128/26 handle="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.756 [INFO][5546] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528 Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.762 [INFO][5546] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.128/26 handle="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.769 [INFO][5546] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.80.135/26] block=192.168.80.128/26 handle="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.770 [INFO][5546] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.135/26] handle="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" host="ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal" Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.770 [INFO][5546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:22:41.797297 containerd[1468]: 2025-01-17 12:22:41.770 [INFO][5546] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.135/26] IPv6=[] ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" HandleID="k8s-pod-network.c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" Jan 17 12:22:41.803939 containerd[1468]: 2025-01-17 12:22:41.772 [INFO][5536] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0", GenerateName:"calico-kube-controllers-796fcf66c4-", Namespace:"calico-system", SelfLink:"", UID:"a94d8998-2d47-46dc-8a77-8c8eb86007b6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796fcf66c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-796fcf66c4-smftv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali772c77b22a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:41.803939 containerd[1468]: 2025-01-17 12:22:41.772 [INFO][5536] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.135/32] ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" Jan 17 12:22:41.803939 containerd[1468]: 2025-01-17 12:22:41.772 [INFO][5536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali772c77b22a8 ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" Jan 17 12:22:41.803939 containerd[1468]: 2025-01-17 12:22:41.777 [INFO][5536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" Jan 17 12:22:41.803939 containerd[1468]: 2025-01-17 12:22:41.777 [INFO][5536] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0", GenerateName:"calico-kube-controllers-796fcf66c4-", Namespace:"calico-system", SelfLink:"", UID:"a94d8998-2d47-46dc-8a77-8c8eb86007b6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796fcf66c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528", Pod:"calico-kube-controllers-796fcf66c4-smftv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali772c77b22a8", MAC:"c2:b3:81:bb:fb:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:41.803939 containerd[1468]: 2025-01-17 12:22:41.791 [INFO][5536] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528" Namespace="calico-system" Pod="calico-kube-controllers-796fcf66c4-smftv" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--796fcf66c4--smftv-eth0" Jan 17 12:22:41.844757 containerd[1468]: time="2025-01-17T12:22:41.844623941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:41.844946 containerd[1468]: time="2025-01-17T12:22:41.844807357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:41.845022 containerd[1468]: time="2025-01-17T12:22:41.844895838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:41.845301 containerd[1468]: time="2025-01-17T12:22:41.845110392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:41.879517 systemd[1]: Started cri-containerd-c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528.scope - libcontainer container c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528. Jan 17 12:22:41.952217 containerd[1468]: time="2025-01-17T12:22:41.952041538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796fcf66c4-smftv,Uid:a94d8998-2d47-46dc-8a77-8c8eb86007b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528\"" Jan 17 12:22:41.974293 kubelet[2610]: I0117 12:22:41.974215 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sjk7p" podStartSLOduration=6.974189409 podStartE2EDuration="6.974189409s" podCreationTimestamp="2025-01-17 12:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:41.973219943 +0000 UTC m=+58.801318524" watchObservedRunningTime="2025-01-17 12:22:41.974189409 +0000 UTC m=+58.802287966" Jan 17 12:22:41.975543 containerd[1468]: time="2025-01-17T12:22:41.975473528Z" level=info msg="CreateContainer within sandbox \"c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:22:41.998605 containerd[1468]: time="2025-01-17T12:22:41.998536802Z" level=info msg="CreateContainer within sandbox \"c937311fa5fb9be12f0aaf7539c67d8822be42cf96a5a4cc5a54fc3d1d91e528\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"509afbec35523005dfd43799b1546be97cd0ca7502e4ecf29ea575ed4fb6171d\"" Jan 17 12:22:42.002721 containerd[1468]: time="2025-01-17T12:22:41.999388744Z" level=info msg="StartContainer for \"509afbec35523005dfd43799b1546be97cd0ca7502e4ecf29ea575ed4fb6171d\"" Jan 17 12:22:42.049510 systemd[1]: Started cri-containerd-509afbec35523005dfd43799b1546be97cd0ca7502e4ecf29ea575ed4fb6171d.scope - libcontainer container 509afbec35523005dfd43799b1546be97cd0ca7502e4ecf29ea575ed4fb6171d. 
Jan 17 12:22:42.116435 containerd[1468]: time="2025-01-17T12:22:42.115816767Z" level=info msg="StartContainer for \"509afbec35523005dfd43799b1546be97cd0ca7502e4ecf29ea575ed4fb6171d\" returns successfully" Jan 17 12:22:43.367188 kubelet[2610]: I0117 12:22:43.367143 2610 scope.go:117] "RemoveContainer" containerID="347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018" Jan 17 12:22:43.371487 containerd[1468]: time="2025-01-17T12:22:43.370325822Z" level=info msg="RemoveContainer for \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\"" Jan 17 12:22:43.378496 containerd[1468]: time="2025-01-17T12:22:43.378354214Z" level=info msg="RemoveContainer for \"347dff1db46fe15811ded95dce41963944b945de9c14f908877e96deeea10018\" returns successfully" Jan 17 12:22:43.380278 containerd[1468]: time="2025-01-17T12:22:43.380224584Z" level=info msg="StopPodSandbox for \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\"" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.447 [WARNING][5799] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.449 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.449 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" iface="eth0" netns="" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.449 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.449 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.496 [INFO][5805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.497 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.497 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.519 [WARNING][5805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.519 [INFO][5805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.521 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:43.525857 containerd[1468]: 2025-01-17 12:22:43.523 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.527235 containerd[1468]: time="2025-01-17T12:22:43.526727995Z" level=info msg="TearDown network for sandbox \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" successfully" Jan 17 12:22:43.527235 containerd[1468]: time="2025-01-17T12:22:43.526769871Z" level=info msg="StopPodSandbox for \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" returns successfully" Jan 17 12:22:43.528189 containerd[1468]: time="2025-01-17T12:22:43.527543200Z" level=info msg="RemovePodSandbox for \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\"" Jan 17 12:22:43.528189 containerd[1468]: time="2025-01-17T12:22:43.527580430Z" level=info msg="Forcibly stopping sandbox \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\"" Jan 17 12:22:43.550632 systemd-networkd[1378]: cali772c77b22a8: Gained IPv6LL Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.591 [WARNING][5824] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.591 [INFO][5824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.591 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" iface="eth0" netns="" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.591 [INFO][5824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.591 [INFO][5824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.648 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.648 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.648 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.667 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.667 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" HandleID="k8s-pod-network.cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.669 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:43.673927 containerd[1468]: 2025-01-17 12:22:43.671 [INFO][5824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622" Jan 17 12:22:43.673927 containerd[1468]: time="2025-01-17T12:22:43.673512254Z" level=info msg="TearDown network for sandbox \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" successfully" Jan 17 12:22:43.684423 containerd[1468]: time="2025-01-17T12:22:43.684061910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:22:43.684423 containerd[1468]: time="2025-01-17T12:22:43.684200620Z" level=info msg="RemovePodSandbox \"cfb4e96ee52d1a4c10026c958456cf8cc4b2bb5806691e8d18cf198be93b3622\" returns successfully" Jan 17 12:22:43.685980 containerd[1468]: time="2025-01-17T12:22:43.685520282Z" level=info msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\"" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.765 [WARNING][5854] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.766 [INFO][5854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.766 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" iface="eth0" netns="" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.766 [INFO][5854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.766 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.817 [INFO][5871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.818 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.818 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.833 [WARNING][5871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.833 [INFO][5871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.837 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:43.843211 containerd[1468]: 2025-01-17 12:22:43.840 [INFO][5854] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:43.844723 containerd[1468]: time="2025-01-17T12:22:43.843177073Z" level=info msg="TearDown network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" successfully" Jan 17 12:22:43.844723 containerd[1468]: time="2025-01-17T12:22:43.844488621Z" level=info msg="StopPodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" returns successfully" Jan 17 12:22:43.847298 containerd[1468]: time="2025-01-17T12:22:43.846761313Z" level=info msg="RemovePodSandbox for \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\"" Jan 17 12:22:43.847298 containerd[1468]: time="2025-01-17T12:22:43.847005137Z" level=info msg="Forcibly stopping sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\"" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.010 [WARNING][5896] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" WorkloadEndpoint="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.010 [INFO][5896] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.010 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" iface="eth0" netns="" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.011 [INFO][5896] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.011 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.074 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.077 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.077 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.091 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.091 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" HandleID="k8s-pod-network.d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d68f7877f--n2vhz-eth0" Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.094 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.097747 containerd[1468]: 2025-01-17 12:22:44.095 [INFO][5896] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf" Jan 17 12:22:44.099074 containerd[1468]: time="2025-01-17T12:22:44.098373220Z" level=info msg="TearDown network for sandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" successfully" Jan 17 12:22:44.107548 containerd[1468]: time="2025-01-17T12:22:44.107286168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:22:44.107548 containerd[1468]: time="2025-01-17T12:22:44.107377422Z" level=info msg="RemovePodSandbox \"d60542e1d2fdfb82cdc38d651362d6798f325c3686f6d49bbe1ec6d5545a9dcf\" returns successfully" Jan 17 12:22:44.108890 containerd[1468]: time="2025-01-17T12:22:44.108538847Z" level=info msg="StopPodSandbox for \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\"" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.186 [WARNING][5950] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7aeb14-5869-48a1-96a7-a215252689a5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21", Pod:"calico-apiserver-7b5f56598d-sv9sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie177f1e9a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.186 [INFO][5950] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.186 [INFO][5950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" iface="eth0" netns="" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.186 [INFO][5950] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.188 [INFO][5950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.236 [INFO][5957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.236 [INFO][5957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.236 [INFO][5957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.246 [WARNING][5957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.246 [INFO][5957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.252 [INFO][5957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.258534 containerd[1468]: 2025-01-17 12:22:44.254 [INFO][5950] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.258534 containerd[1468]: time="2025-01-17T12:22:44.256143490Z" level=info msg="TearDown network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\" successfully" Jan 17 12:22:44.258534 containerd[1468]: time="2025-01-17T12:22:44.256176533Z" level=info msg="StopPodSandbox for \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\" returns successfully" Jan 17 12:22:44.258534 containerd[1468]: time="2025-01-17T12:22:44.256976034Z" level=info msg="RemovePodSandbox for \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\"" Jan 17 12:22:44.258534 containerd[1468]: time="2025-01-17T12:22:44.257047245Z" level=info msg="Forcibly stopping sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\"" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.321 [WARNING][5979] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d7aeb14-5869-48a1-96a7-a215252689a5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"6d126c17640086f95183a4bc24c26ce76d27923110615f1da0385381d24a0f21", Pod:"calico-apiserver-7b5f56598d-sv9sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie177f1e9a5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.321 [INFO][5979] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.322 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" iface="eth0" netns="" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.322 [INFO][5979] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.322 [INFO][5979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.364 [INFO][5986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.364 [INFO][5986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.364 [INFO][5986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.373 [WARNING][5986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.373 [INFO][5986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" HandleID="k8s-pod-network.94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--sv9sp-eth0" Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.375 [INFO][5986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.379366 containerd[1468]: 2025-01-17 12:22:44.377 [INFO][5979] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1" Jan 17 12:22:44.382012 containerd[1468]: time="2025-01-17T12:22:44.379604902Z" level=info msg="TearDown network for sandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\" successfully" Jan 17 12:22:44.400023 containerd[1468]: time="2025-01-17T12:22:44.399473023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:22:44.400023 containerd[1468]: time="2025-01-17T12:22:44.399583755Z" level=info msg="RemovePodSandbox \"94fbcac91a94bee62820828b2ba79f8d060d9c70188bf433ef70d6e67f504ee1\" returns successfully" Jan 17 12:22:44.401803 containerd[1468]: time="2025-01-17T12:22:44.401595693Z" level=info msg="StopPodSandbox for \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\"" Jan 17 12:22:44.401803 containerd[1468]: time="2025-01-17T12:22:44.401749231Z" level=info msg="TearDown network for sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" successfully" Jan 17 12:22:44.401803 containerd[1468]: time="2025-01-17T12:22:44.401768634Z" level=info msg="StopPodSandbox for \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" returns successfully" Jan 17 12:22:44.404339 containerd[1468]: time="2025-01-17T12:22:44.402500673Z" level=info msg="RemovePodSandbox for \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\"" Jan 17 12:22:44.404339 containerd[1468]: time="2025-01-17T12:22:44.402535633Z" level=info msg="Forcibly stopping sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\"" Jan 17 12:22:44.404339 containerd[1468]: time="2025-01-17T12:22:44.402618997Z" level=info msg="TearDown network for sandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" successfully" Jan 17 12:22:44.408117 containerd[1468]: time="2025-01-17T12:22:44.408066113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
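Each teardown above repeats the same IPAM bracket: acquire the host-wide IPAM lock, try to release the address by handle ID, fall back to the workload ID when the handle no longer exists ("Asked to release address but it doesn't exist. Ignoring"), then release the lock. A hypothetical sketch of that ordering follows; the two release helpers are invented stand-ins for the datastore calls in Calico's ipam_plugin.go, and a process-local mutex stands in for the real host-wide lock.

package ipamsketch

import "sync"

// hostIPAMLock models the "host-wide IPAM lock" bracketing every release.
var hostIPAMLock sync.Mutex

// Hypothetical stand-ins for Calico's datastore operations.
func releaseByHandle(handleID string) (found bool, err error) { return false, nil }
func releaseByWorkload(workloadID string) error               { return nil }

// releaseAddress mirrors the record ordering above: lock, release by handle,
// fall back to the workload ID if the handle is already gone, unlock.
func releaseAddress(handleID, workloadID string) error {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	found, err := releaseByHandle(handleID)
	if err != nil {
		return err
	}
	if found {
		return nil
	}
	// Handle missing: ignore, as the WARNING records do, and try the workload ID.
	return releaseByWorkload(workloadID)
}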
Jan 17 12:22:44.408230 containerd[1468]: time="2025-01-17T12:22:44.408158731Z" level=info msg="RemovePodSandbox \"d222c5822bc7a5e4d6cc39e105d819fb42ddc057460292a00475c7bc703bf3f9\" returns successfully" Jan 17 12:22:44.408775 containerd[1468]: time="2025-01-17T12:22:44.408746970Z" level=info msg="StopPodSandbox for \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\"" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.471 [WARNING][6005] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9a11ecfa-a757-43fc-8ab9-8e4424da26ba", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad", Pod:"coredns-7db6d8ff4d-9xkbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7f2a76c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.472 [INFO][6005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.472 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" iface="eth0" netns="" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.472 [INFO][6005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.472 [INFO][6005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.510 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.510 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.511 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.527 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.527 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.530 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.534649 containerd[1468]: 2025-01-17 12:22:44.531 [INFO][6005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.535784 containerd[1468]: time="2025-01-17T12:22:44.535677110Z" level=info msg="TearDown network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\" successfully" Jan 17 12:22:44.536069 containerd[1468]: time="2025-01-17T12:22:44.535917968Z" level=info msg="StopPodSandbox for \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\" returns successfully" Jan 17 12:22:44.537275 containerd[1468]: time="2025-01-17T12:22:44.536845185Z" level=info msg="RemovePodSandbox for \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\"" Jan 17 12:22:44.537275 containerd[1468]: time="2025-01-17T12:22:44.536887380Z" level=info msg="Forcibly stopping sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\"" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.624 [WARNING][6052] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9a11ecfa-a757-43fc-8ab9-8e4424da26ba", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"c6eb896913cdd7c710d2785bcddf0d99e8118b7b638f7183c0ae7d06179cddad", Pod:"coredns-7db6d8ff4d-9xkbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7f2a76c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.625 [INFO][6052] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.625 [INFO][6052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" iface="eth0" netns="" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.625 [INFO][6052] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.625 [INFO][6052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.658 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.658 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.658 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.667 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.667 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" HandleID="k8s-pod-network.6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9xkbk-eth0" Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.669 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.673056 containerd[1468]: 2025-01-17 12:22:44.671 [INFO][6052] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f" Jan 17 12:22:44.673056 containerd[1468]: time="2025-01-17T12:22:44.672658668Z" level=info msg="TearDown network for sandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\" successfully" Jan 17 12:22:44.680121 containerd[1468]: time="2025-01-17T12:22:44.680067310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:22:44.680324 containerd[1468]: time="2025-01-17T12:22:44.680160178Z" level=info msg="RemovePodSandbox \"6a38206cb2f609e975cab0bd1aa8127284e181469edf2fcfd2a31bb99827918f\" returns successfully" Jan 17 12:22:44.680969 containerd[1468]: time="2025-01-17T12:22:44.680934662Z" level=info msg="StopPodSandbox for \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\"" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.731 [WARNING][6086] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d4d7de4f-610c-42bb-9ed3-95154eded5ac", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908", Pod:"coredns-7db6d8ff4d-xn5bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa96cb793f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.731 [INFO][6086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.731 [INFO][6086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" iface="eth0" netns="" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.731 [INFO][6086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.731 [INFO][6086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.757 [INFO][6093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.757 [INFO][6093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.757 [INFO][6093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.765 [WARNING][6093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.765 [INFO][6093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.767 [INFO][6093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.769937 containerd[1468]: 2025-01-17 12:22:44.768 [INFO][6086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.770784 containerd[1468]: time="2025-01-17T12:22:44.770067033Z" level=info msg="TearDown network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\" successfully" Jan 17 12:22:44.770784 containerd[1468]: time="2025-01-17T12:22:44.770106389Z" level=info msg="StopPodSandbox for \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\" returns successfully" Jan 17 12:22:44.771213 containerd[1468]: time="2025-01-17T12:22:44.771105783Z" level=info msg="RemovePodSandbox for \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\"" Jan 17 12:22:44.771213 containerd[1468]: time="2025-01-17T12:22:44.771148308Z" level=info msg="Forcibly stopping sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\"" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.817 [WARNING][6111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d4d7de4f-610c-42bb-9ed3-95154eded5ac", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 21, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"22539a676064a3dd381aa5e1246b85961a9fabd502db97fce911140880fb4908", Pod:"coredns-7db6d8ff4d-xn5bq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa96cb793f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.817 [INFO][6111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.817 [INFO][6111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" iface="eth0" netns="" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.817 [INFO][6111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.817 [INFO][6111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.844 [INFO][6117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.844 [INFO][6117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.844 [INFO][6117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.853 [WARNING][6117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.853 [INFO][6117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" HandleID="k8s-pod-network.133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--xn5bq-eth0" Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.854 [INFO][6117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.857497 containerd[1468]: 2025-01-17 12:22:44.856 [INFO][6111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044" Jan 17 12:22:44.858397 containerd[1468]: time="2025-01-17T12:22:44.857548321Z" level=info msg="TearDown network for sandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\" successfully" Jan 17 12:22:44.862774 containerd[1468]: time="2025-01-17T12:22:44.862675624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:22:44.862774 containerd[1468]: time="2025-01-17T12:22:44.862771903Z" level=info msg="RemovePodSandbox \"133fef128fcaf8a502aef03ac72a1717d13908857264794d3bc477127f965044\" returns successfully" Jan 17 12:22:44.863550 containerd[1468]: time="2025-01-17T12:22:44.863457400Z" level=info msg="StopPodSandbox for \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\"" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.908 [WARNING][6135] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"df01509d-c2e3-4521-bb4f-4b625ab957e3", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060", Pod:"calico-apiserver-7b5f56598d-kfl2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3af6971d627", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.908 [INFO][6135] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.908 [INFO][6135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" iface="eth0" netns="" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.908 [INFO][6135] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.908 [INFO][6135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.934 [INFO][6141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.934 [INFO][6141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.934 [INFO][6141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.942 [WARNING][6141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.942 [INFO][6141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.944 [INFO][6141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:44.947201 containerd[1468]: 2025-01-17 12:22:44.945 [INFO][6135] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:44.947201 containerd[1468]: time="2025-01-17T12:22:44.947135525Z" level=info msg="TearDown network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\" successfully" Jan 17 12:22:44.947201 containerd[1468]: time="2025-01-17T12:22:44.947171584Z" level=info msg="StopPodSandbox for \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\" returns successfully" Jan 17 12:22:44.950430 containerd[1468]: time="2025-01-17T12:22:44.948672097Z" level=info msg="RemovePodSandbox for \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\"" Jan 17 12:22:44.950430 containerd[1468]: time="2025-01-17T12:22:44.948715522Z" level=info msg="Forcibly stopping sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\"" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.008 [WARNING][6159] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0", GenerateName:"calico-apiserver-7b5f56598d-", Namespace:"calico-apiserver", SelfLink:"", UID:"df01509d-c2e3-4521-bb4f-4b625ab957e3", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b5f56598d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"a8d7c215db44b8190ee77c848a375951ed9c10a363627babc1e98d2765766060", Pod:"calico-apiserver-7b5f56598d-kfl2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3af6971d627", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.008 [INFO][6159] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.008 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" iface="eth0" netns="" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.008 [INFO][6159] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.009 [INFO][6159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.038 [INFO][6166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.039 [INFO][6166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.039 [INFO][6166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.045 [WARNING][6166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.045 [INFO][6166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" HandleID="k8s-pod-network.913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-calico--apiserver--7b5f56598d--kfl2c-eth0" Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.047 [INFO][6166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:45.050003 containerd[1468]: 2025-01-17 12:22:45.048 [INFO][6159] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87" Jan 17 12:22:45.051083 containerd[1468]: time="2025-01-17T12:22:45.050004330Z" level=info msg="TearDown network for sandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\" successfully" Jan 17 12:22:45.055298 containerd[1468]: time="2025-01-17T12:22:45.055225908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:22:45.055490 containerd[1468]: time="2025-01-17T12:22:45.055344044Z" level=info msg="RemovePodSandbox \"913635f7277a52db91dcfb53c5da7a87373abb9db9f9bda690cdf2fb998d0d87\" returns successfully" Jan 17 12:22:45.055992 containerd[1468]: time="2025-01-17T12:22:45.055959514Z" level=info msg="StopPodSandbox for \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\"" Jan 17 12:22:45.056208 containerd[1468]: time="2025-01-17T12:22:45.056176886Z" level=info msg="TearDown network for sandbox \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" successfully" Jan 17 12:22:45.056208 containerd[1468]: time="2025-01-17T12:22:45.056205603Z" level=info msg="StopPodSandbox for \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" returns successfully" Jan 17 12:22:45.056822 containerd[1468]: time="2025-01-17T12:22:45.056669855Z" level=info msg="RemovePodSandbox for \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\"" Jan 17 12:22:45.056822 containerd[1468]: time="2025-01-17T12:22:45.056720617Z" level=info msg="Forcibly stopping sandbox \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\"" Jan 17 12:22:45.056967 containerd[1468]: time="2025-01-17T12:22:45.056826900Z" level=info msg="TearDown network for sandbox \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" successfully" Jan 17 12:22:45.061748 containerd[1468]: time="2025-01-17T12:22:45.061694357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:22:45.061961 containerd[1468]: time="2025-01-17T12:22:45.061773912Z" level=info msg="RemovePodSandbox \"c8ef91d6f3de0543d3a5e539f2083ca3036678d425cc060392d2e96b697c2165\" returns successfully" Jan 17 12:22:45.062452 containerd[1468]: time="2025-01-17T12:22:45.062237068Z" level=info msg="StopPodSandbox for \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\"" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.108 [WARNING][6184] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060", Pod:"csi-node-driver-gjb8c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57eb19459d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.108 [INFO][6184] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.108 [INFO][6184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" iface="eth0" netns="" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.108 [INFO][6184] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.108 [INFO][6184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.136 [INFO][6190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.136 [INFO][6190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.136 [INFO][6190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.143 [WARNING][6190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.143 [INFO][6190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.145 [INFO][6190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:45.147658 containerd[1468]: 2025-01-17 12:22:45.146 [INFO][6184] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.148854 containerd[1468]: time="2025-01-17T12:22:45.147749418Z" level=info msg="TearDown network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\" successfully" Jan 17 12:22:45.148854 containerd[1468]: time="2025-01-17T12:22:45.147785991Z" level=info msg="StopPodSandbox for \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\" returns successfully" Jan 17 12:22:45.148972 containerd[1468]: time="2025-01-17T12:22:45.148900789Z" level=info msg="RemovePodSandbox for \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\"" Jan 17 12:22:45.148972 containerd[1468]: time="2025-01-17T12:22:45.148941233Z" level=info msg="Forcibly stopping sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\"" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.195 [WARNING][6208] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68c040bb-a18d-4fed-9ea3-2d0c63ef70bc", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 22, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-e966b69e4db9c6f11aa6.c.flatcar-212911.internal", ContainerID:"afad4e44e505c1ce870e0cf93d904ef1ee24c99458cd5e11fc5cccdb0ed74060", Pod:"csi-node-driver-gjb8c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57eb19459d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.196 [INFO][6208] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.196 [INFO][6208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" iface="eth0" netns="" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.196 [INFO][6208] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.196 [INFO][6208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.221 [INFO][6214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.221 [INFO][6214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.221 [INFO][6214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.228 [WARNING][6214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.228 [INFO][6214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" HandleID="k8s-pod-network.fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Workload="ci--4081--3--0--e966b69e4db9c6f11aa6.c.flatcar--212911.internal-k8s-csi--node--driver--gjb8c-eth0" Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.231 [INFO][6214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:22:45.234542 containerd[1468]: 2025-01-17 12:22:45.232 [INFO][6208] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5" Jan 17 12:22:45.234542 containerd[1468]: time="2025-01-17T12:22:45.234114317Z" level=info msg="TearDown network for sandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\" successfully" Jan 17 12:22:45.239557 containerd[1468]: time="2025-01-17T12:22:45.239503072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:22:45.239691 containerd[1468]: time="2025-01-17T12:22:45.239603284Z" level=info msg="RemovePodSandbox \"fdbdc5c167e2a38e6f499b803ce2f0a0cb7e21bf89471ea2e426a9d65a2724d5\" returns successfully" Jan 17 12:22:46.995723 kubelet[2610]: I0117 12:22:46.995240 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:47.032507 kubelet[2610]: I0117 12:22:47.031414 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-796fcf66c4-smftv" podStartSLOduration=10.031384835 podStartE2EDuration="10.031384835s" podCreationTimestamp="2025-01-17 12:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:43.01051814 +0000 UTC m=+59.838616700" watchObservedRunningTime="2025-01-17 12:22:47.031384835 +0000 UTC m=+63.859483392" Jan 17 12:22:47.102766 ntpd[1430]: Listen normally on 15 cali772c77b22a8 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 12:22:47.103269 ntpd[1430]: 17 Jan 12:22:47 ntpd[1430]: Listen normally on 15 cali772c77b22a8 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 12:22:50.907609 kubelet[2610]: I0117 12:22:50.907379 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:22:57.440794 systemd[1]: Started sshd@7-10.128.0.38:22-139.178.89.65:60048.service - OpenSSH per-connection server daemon (139.178.89.65:60048). Jan 17 12:22:57.752482 sshd[6247]: Accepted publickey for core from 139.178.89.65 port 60048 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:22:57.755723 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:57.765817 systemd-logind[1448]: New session 8 of user core. Jan 17 12:22:57.771832 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 12:22:58.064926 sshd[6247]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:58.070982 systemd[1]: sshd@7-10.128.0.38:22-139.178.89.65:60048.service: Deactivated successfully. Jan 17 12:22:58.074436 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:22:58.077416 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:22:58.079298 systemd-logind[1448]: Removed session 8. Jan 17 12:23:03.124240 systemd[1]: Started sshd@8-10.128.0.38:22-139.178.89.65:39152.service - OpenSSH per-connection server daemon (139.178.89.65:39152). Jan 17 12:23:03.431381 sshd[6261]: Accepted publickey for core from 139.178.89.65 port 39152 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:03.433886 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:03.443443 systemd-logind[1448]: New session 9 of user core. Jan 17 12:23:03.450638 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:23:03.746945 sshd[6261]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:03.752139 systemd[1]: sshd@8-10.128.0.38:22-139.178.89.65:39152.service: Deactivated successfully. Jan 17 12:23:03.754959 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:23:03.757118 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:23:03.759031 systemd-logind[1448]: Removed session 9. Jan 17 12:23:05.636818 systemd[1]: run-containerd-runc-k8s.io-9c8fa11eb25f929badde0c75c1767c16ac1e23186daebdfb007274f5c0318dcc-runc.6HpMjv.mount: Deactivated successfully. Jan 17 12:23:08.804670 systemd[1]: Started sshd@9-10.128.0.38:22-139.178.89.65:39158.service - OpenSSH per-connection server daemon (139.178.89.65:39158). Jan 17 12:23:09.089690 sshd[6327]: Accepted publickey for core from 139.178.89.65 port 39158 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:09.091739 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:09.097930 systemd-logind[1448]: New session 10 of user core. Jan 17 12:23:09.103483 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:23:09.405302 sshd[6327]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:09.411875 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:23:09.415174 systemd[1]: sshd@9-10.128.0.38:22-139.178.89.65:39158.service: Deactivated successfully. Jan 17 12:23:09.420545 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:23:09.423945 systemd-logind[1448]: Removed session 10. Jan 17 12:23:09.462840 systemd[1]: Started sshd@10-10.128.0.38:22-139.178.89.65:39174.service - OpenSSH per-connection server daemon (139.178.89.65:39174). Jan 17 12:23:09.744086 sshd[6340]: Accepted publickey for core from 139.178.89.65 port 39174 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:09.746351 sshd[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:09.755074 systemd-logind[1448]: New session 11 of user core. Jan 17 12:23:09.760483 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:23:10.108762 sshd[6340]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:10.118311 systemd[1]: sshd@10-10.128.0.38:22-139.178.89.65:39174.service: Deactivated successfully. Jan 17 12:23:10.124214 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 17 12:23:10.129369 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:23:10.131906 systemd-logind[1448]: Removed session 11. Jan 17 12:23:10.162697 systemd[1]: Started sshd@11-10.128.0.38:22-139.178.89.65:39182.service - OpenSSH per-connection server daemon (139.178.89.65:39182). Jan 17 12:23:10.457810 sshd[6351]: Accepted publickey for core from 139.178.89.65 port 39182 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:10.459689 sshd[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:10.465329 systemd-logind[1448]: New session 12 of user core. Jan 17 12:23:10.469535 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:23:10.770939 sshd[6351]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:10.777800 systemd[1]: sshd@11-10.128.0.38:22-139.178.89.65:39182.service: Deactivated successfully. Jan 17 12:23:10.783710 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:23:10.786072 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:23:10.787747 systemd-logind[1448]: Removed session 12. Jan 17 12:23:15.825648 systemd[1]: Started sshd@12-10.128.0.38:22-139.178.89.65:32896.service - OpenSSH per-connection server daemon (139.178.89.65:32896). Jan 17 12:23:16.119521 sshd[6386]: Accepted publickey for core from 139.178.89.65 port 32896 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:16.121706 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:16.128086 systemd-logind[1448]: New session 13 of user core. Jan 17 12:23:16.137558 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:23:16.409594 sshd[6386]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:16.415119 systemd[1]: sshd@12-10.128.0.38:22-139.178.89.65:32896.service: Deactivated successfully. Jan 17 12:23:16.418258 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:23:16.420591 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:23:16.422193 systemd-logind[1448]: Removed session 13. Jan 17 12:23:21.465736 systemd[1]: Started sshd@13-10.128.0.38:22-139.178.89.65:51120.service - OpenSSH per-connection server daemon (139.178.89.65:51120). Jan 17 12:23:21.746765 sshd[6399]: Accepted publickey for core from 139.178.89.65 port 51120 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:21.748769 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:21.754603 systemd-logind[1448]: New session 14 of user core. Jan 17 12:23:21.760489 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:23:22.038265 sshd[6399]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:22.044395 systemd[1]: sshd@13-10.128.0.38:22-139.178.89.65:51120.service: Deactivated successfully. Jan 17 12:23:22.047006 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:23:22.048455 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:23:22.050137 systemd-logind[1448]: Removed session 14. Jan 17 12:23:27.095681 systemd[1]: Started sshd@14-10.128.0.38:22-139.178.89.65:51132.service - OpenSSH per-connection server daemon (139.178.89.65:51132). 
Jan 17 12:23:27.383696 sshd[6423]: Accepted publickey for core from 139.178.89.65 port 51132 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:27.385708 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:27.392586 systemd-logind[1448]: New session 15 of user core. Jan 17 12:23:27.397466 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:23:27.670679 sshd[6423]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:27.676652 systemd[1]: sshd@14-10.128.0.38:22-139.178.89.65:51132.service: Deactivated successfully. Jan 17 12:23:27.679512 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:23:27.680679 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:23:27.682289 systemd-logind[1448]: Removed session 15. Jan 17 12:23:32.730700 systemd[1]: Started sshd@15-10.128.0.38:22-139.178.89.65:58566.service - OpenSSH per-connection server daemon (139.178.89.65:58566). Jan 17 12:23:33.033610 sshd[6438]: Accepted publickey for core from 139.178.89.65 port 58566 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:33.037612 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:33.050083 systemd-logind[1448]: New session 16 of user core. Jan 17 12:23:33.056717 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:23:33.372368 sshd[6438]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:33.379034 systemd[1]: sshd@15-10.128.0.38:22-139.178.89.65:58566.service: Deactivated successfully. Jan 17 12:23:33.382085 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:23:33.383390 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:23:33.385128 systemd-logind[1448]: Removed session 16. Jan 17 12:23:33.426958 systemd[1]: Started sshd@16-10.128.0.38:22-139.178.89.65:58570.service - OpenSSH per-connection server daemon (139.178.89.65:58570). Jan 17 12:23:33.730181 sshd[6451]: Accepted publickey for core from 139.178.89.65 port 58570 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:33.732426 sshd[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:33.744409 systemd-logind[1448]: New session 17 of user core. Jan 17 12:23:33.750789 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:23:34.142569 sshd[6451]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:34.154663 systemd[1]: sshd@16-10.128.0.38:22-139.178.89.65:58570.service: Deactivated successfully. Jan 17 12:23:34.163542 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:23:34.167336 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:23:34.172318 systemd-logind[1448]: Removed session 17. Jan 17 12:23:34.208744 systemd[1]: Started sshd@17-10.128.0.38:22-139.178.89.65:58576.service - OpenSSH per-connection server daemon (139.178.89.65:58576). Jan 17 12:23:34.509372 sshd[6462]: Accepted publickey for core from 139.178.89.65 port 58576 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:34.511493 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:34.518416 systemd-logind[1448]: New session 18 of user core. Jan 17 12:23:34.528576 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 17 12:23:36.909183 sshd[6462]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:36.915595 systemd[1]: sshd@17-10.128.0.38:22-139.178.89.65:58576.service: Deactivated successfully. Jan 17 12:23:36.919008 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:23:36.921064 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:23:36.922765 systemd-logind[1448]: Removed session 18. Jan 17 12:23:36.965214 systemd[1]: Started sshd@18-10.128.0.38:22-139.178.89.65:58592.service - OpenSSH per-connection server daemon (139.178.89.65:58592). Jan 17 12:23:37.251563 sshd[6501]: Accepted publickey for core from 139.178.89.65 port 58592 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:37.253731 sshd[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:37.262022 systemd-logind[1448]: New session 19 of user core. Jan 17 12:23:37.267561 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:23:37.697369 sshd[6501]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:37.707102 systemd[1]: sshd@18-10.128.0.38:22-139.178.89.65:58592.service: Deactivated successfully. Jan 17 12:23:37.714872 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:23:37.718458 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:23:37.723354 systemd-logind[1448]: Removed session 19. Jan 17 12:23:37.761775 systemd[1]: Started sshd@19-10.128.0.38:22-139.178.89.65:58604.service - OpenSSH per-connection server daemon (139.178.89.65:58604). Jan 17 12:23:38.079732 sshd[6512]: Accepted publickey for core from 139.178.89.65 port 58604 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:38.081810 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:38.088760 systemd-logind[1448]: New session 20 of user core. Jan 17 12:23:38.093470 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:23:38.367296 sshd[6512]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:38.372788 systemd[1]: sshd@19-10.128.0.38:22-139.178.89.65:58604.service: Deactivated successfully. Jan 17 12:23:38.375698 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:23:38.376899 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:23:38.378507 systemd-logind[1448]: Removed session 20. Jan 17 12:23:43.421696 systemd[1]: Started sshd@20-10.128.0.38:22-139.178.89.65:47478.service - OpenSSH per-connection server daemon (139.178.89.65:47478). Jan 17 12:23:43.711104 sshd[6546]: Accepted publickey for core from 139.178.89.65 port 47478 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:43.714665 sshd[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:43.722575 systemd-logind[1448]: New session 21 of user core. Jan 17 12:23:43.731678 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:23:44.046594 sshd[6546]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:44.053713 systemd[1]: sshd@20-10.128.0.38:22-139.178.89.65:47478.service: Deactivated successfully. Jan 17 12:23:44.060146 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:23:44.063318 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:23:44.066167 systemd-logind[1448]: Removed session 21. 
Jan 17 12:23:49.106666 systemd[1]: Started sshd@21-10.128.0.38:22-139.178.89.65:47494.service - OpenSSH per-connection server daemon (139.178.89.65:47494). Jan 17 12:23:49.401926 sshd[6581]: Accepted publickey for core from 139.178.89.65 port 47494 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:49.404123 sshd[6581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:49.409971 systemd-logind[1448]: New session 22 of user core. Jan 17 12:23:49.415459 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:23:49.694859 sshd[6581]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:49.700079 systemd[1]: sshd@21-10.128.0.38:22-139.178.89.65:47494.service: Deactivated successfully. Jan 17 12:23:49.702902 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:23:49.705178 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:23:49.707370 systemd-logind[1448]: Removed session 22. Jan 17 12:23:54.754832 systemd[1]: Started sshd@22-10.128.0.38:22-139.178.89.65:57958.service - OpenSSH per-connection server daemon (139.178.89.65:57958). Jan 17 12:23:55.040036 sshd[6594]: Accepted publickey for core from 139.178.89.65 port 57958 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:23:55.042419 sshd[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:55.049178 systemd-logind[1448]: New session 23 of user core. Jan 17 12:23:55.057500 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:23:55.328073 sshd[6594]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:55.332850 systemd[1]: sshd@22-10.128.0.38:22-139.178.89.65:57958.service: Deactivated successfully. Jan 17 12:23:55.336020 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:23:55.338761 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:23:55.340385 systemd-logind[1448]: Removed session 23. Jan 17 12:24:00.383618 systemd[1]: Started sshd@23-10.128.0.38:22-139.178.89.65:57970.service - OpenSSH per-connection server daemon (139.178.89.65:57970). Jan 17 12:24:00.676123 sshd[6609]: Accepted publickey for core from 139.178.89.65 port 57970 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:24:00.678223 sshd[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:00.684799 systemd-logind[1448]: New session 24 of user core. Jan 17 12:24:00.689482 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:24:00.965589 sshd[6609]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:00.973880 systemd[1]: sshd@23-10.128.0.38:22-139.178.89.65:57970.service: Deactivated successfully. Jan 17 12:24:00.976980 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:24:00.978337 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:24:00.979894 systemd-logind[1448]: Removed session 24.