Jan 30 13:47:25.113600 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:47:25.113648 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:47:25.113666 kernel: BIOS-provided physical RAM map:
Jan 30 13:47:25.113681 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 30 13:47:25.113693 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 30 13:47:25.113706 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 30 13:47:25.113721 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 30 13:47:25.113739 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 30 13:47:25.113753 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 30 13:47:25.113767 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 30 13:47:25.113781 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 30 13:47:25.113794 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 30 13:47:25.113808 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 30 13:47:25.113821 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 30 13:47:25.113841 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 30 13:47:25.113856 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 30 13:47:25.113872 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 30 13:47:25.113887 kernel: NX (Execute Disable) protection: active
Jan 30 13:47:25.113904 kernel: APIC: Static calls initialized
Jan 30 13:47:25.113920 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:47:25.113936 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 30 13:47:25.113952 kernel: SMBIOS 2.4 present.
Jan 30 13:47:25.113969 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 30 13:47:25.113985 kernel: Hypervisor detected: KVM
Jan 30 13:47:25.114005 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:47:25.114021 kernel: kvm-clock: using sched offset of 12638355307 cycles
Jan 30 13:47:25.114044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:47:25.114061 kernel: tsc: Detected 2299.998 MHz processor
Jan 30 13:47:25.114077 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:47:25.114094 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:47:25.114110 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 30 13:47:25.114126 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 30 13:47:25.114142 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:47:25.114192 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 30 13:47:25.114208 kernel: Using GB pages for direct mapping
Jan 30 13:47:25.114224 kernel: Secure boot disabled
Jan 30 13:47:25.114240 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:47:25.114257 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 30 13:47:25.114273 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 30 13:47:25.114290 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 30 13:47:25.114313 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 30 13:47:25.114334 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 30 13:47:25.114351 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 30 13:47:25.114369 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 30 13:47:25.114386 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 30 13:47:25.114404 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 30 13:47:25.114420 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 30 13:47:25.114441 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 30 13:47:25.114458 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 30 13:47:25.114475 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 30 13:47:25.114493 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 30 13:47:25.114510 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 30 13:47:25.114527 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 30 13:47:25.114545 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 30 13:47:25.114562 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 30 13:47:25.114579 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 30 13:47:25.114599 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 30 13:47:25.114617 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:47:25.114634 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:47:25.114651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 13:47:25.114668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 30 13:47:25.114685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 30 13:47:25.114701 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 30 13:47:25.114716 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 30 13:47:25.114731 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 30 13:47:25.114751 kernel: Zone ranges:
Jan 30 13:47:25.114767 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:47:25.114784 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:47:25.114801 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 13:47:25.114818 kernel: Movable zone start for each node
Jan 30 13:47:25.114836 kernel: Early memory node ranges
Jan 30 13:47:25.114852 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 30 13:47:25.114870 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 30 13:47:25.114887 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 30 13:47:25.114902 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 30 13:47:25.114922 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 13:47:25.114939 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 30 13:47:25.114956 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:47:25.114973 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 30 13:47:25.114988 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 30 13:47:25.115006 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 30 13:47:25.115024 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 30 13:47:25.115048 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 13:47:25.115066 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:47:25.115088 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:47:25.115105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:47:25.115120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:47:25.115137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:47:25.115168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:47:25.115194 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:47:25.115211 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:47:25.115229 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:47:25.115246 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:47:25.115269 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:47:25.115287 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:47:25.115304 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:47:25.115322 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:47:25.115338 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:47:25.115354 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:47:25.115371 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:47:25.115390 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:47:25.115413 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:47:25.115429 kernel: random: crng init done
Jan 30 13:47:25.115447 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 13:47:25.115464 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:47:25.115482 kernel: Fallback order for Node 0: 0
Jan 30 13:47:25.115499 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 30 13:47:25.115516 kernel: Policy zone: Normal
Jan 30 13:47:25.115533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:47:25.115549 kernel: software IO TLB: area num 2.
Jan 30 13:47:25.115570 kernel: Memory: 7513372K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346952K reserved, 0K cma-reserved)
Jan 30 13:47:25.115587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:47:25.115603 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:47:25.115620 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:47:25.115635 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:47:25.115652 kernel: Dynamic Preempt: voluntary
Jan 30 13:47:25.115670 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:47:25.115694 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:47:25.115728 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:47:25.115744 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:47:25.115781 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:47:25.115801 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:47:25.115817 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:47:25.115833 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:47:25.115849 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:47:25.115865 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:47:25.115882 kernel: Console: colour dummy device 80x25
Jan 30 13:47:25.115902 kernel: printk: console [ttyS0] enabled
Jan 30 13:47:25.115919 kernel: ACPI: Core revision 20230628
Jan 30 13:47:25.115938 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:47:25.115955 kernel: x2apic enabled
Jan 30 13:47:25.115972 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:47:25.115988 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 30 13:47:25.116004 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 13:47:25.116021 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 30 13:47:25.116052 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 30 13:47:25.116071 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 30 13:47:25.116089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:47:25.116108 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 13:47:25.116127 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 13:47:25.116145 kernel: Spectre V2 : Mitigation: IBRS
Jan 30 13:47:25.116180 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:47:25.116199 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:47:25.116218 kernel: RETBleed: Mitigation: IBRS
Jan 30 13:47:25.116242 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:47:25.116260 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 30 13:47:25.116278 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:47:25.116297 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 13:47:25.116314 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:47:25.116331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:47:25.116349 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:47:25.116367 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:47:25.116386 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:47:25.116408 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 13:47:25.116426 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:47:25.116442 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:47:25.116459 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:47:25.116478 kernel: landlock: Up and running.
Jan 30 13:47:25.116497 kernel: SELinux: Initializing.
Jan 30 13:47:25.116515 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:47:25.116532 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:47:25.116551 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 30 13:47:25.116576 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:47:25.116595 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:47:25.116613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:47:25.116630 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 30 13:47:25.116647 kernel: signal: max sigframe size: 1776
Jan 30 13:47:25.116663 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:47:25.116680 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:47:25.116697 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:47:25.116713 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:47:25.116736 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:47:25.116752 kernel: .... node #0, CPUs: #1
Jan 30 13:47:25.116770 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 13:47:25.116787 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:47:25.116804 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:47:25.116821 kernel: smpboot: Max logical packages: 1
Jan 30 13:47:25.116861 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 30 13:47:25.116877 kernel: devtmpfs: initialized
Jan 30 13:47:25.116900 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:47:25.116916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 30 13:47:25.116933 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:47:25.116951 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:47:25.116968 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:47:25.116985 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:47:25.117003 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:47:25.117022 kernel: audit: type=2000 audit(1738244844.201:1): state=initialized audit_enabled=0 res=1
Jan 30 13:47:25.117080 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:47:25.117103 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:47:25.117119 kernel: cpuidle: using governor menu
Jan 30 13:47:25.117137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:47:25.117187 kernel: dca service started, version 1.12.1
Jan 30 13:47:25.117208 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:47:25.117227 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:47:25.117246 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:47:25.117263 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:47:25.117280 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:47:25.117302 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:47:25.117318 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:47:25.117335 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:47:25.117354 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:47:25.117372 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:47:25.117391 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 13:47:25.117411 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:47:25.117428 kernel: ACPI: Interpreter enabled
Jan 30 13:47:25.117448 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:47:25.117469 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:47:25.117488 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:47:25.117506 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 30 13:47:25.117524 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 13:47:25.117543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:47:25.117790 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:47:25.117975 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:47:25.118172 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:47:25.118201 kernel: PCI host bridge to bus 0000:00
Jan 30 13:47:25.118376 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:47:25.118551 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:47:25.118711 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:47:25.118876 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 30 13:47:25.119047 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:47:25.121336 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:47:25.121553 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 30 13:47:25.121748 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 13:47:25.121925 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:47:25.122115 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 30 13:47:25.123277 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 30 13:47:25.124286 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 30 13:47:25.124530 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:47:25.124740 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 30 13:47:25.124941 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 30 13:47:25.126198 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:47:25.126429 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 30 13:47:25.126609 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 30 13:47:25.126639 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:47:25.126658 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:47:25.126677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:47:25.126695 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:47:25.126714 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:47:25.126732 kernel: iommu: Default domain type: Translated
Jan 30 13:47:25.126751 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:47:25.126769 kernel: efivars: Registered efivars operations
Jan 30 13:47:25.126787 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:47:25.126806 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:47:25.126828 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 30 13:47:25.126846 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 30 13:47:25.126864 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 30 13:47:25.126881 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 30 13:47:25.126899 kernel: vgaarb: loaded
Jan 30 13:47:25.126917 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:47:25.126935 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:47:25.126954 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:47:25.126976 kernel: pnp: PnP ACPI init
Jan 30 13:47:25.126994 kernel: pnp: PnP ACPI: found 7 devices
Jan 30 13:47:25.127013 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:47:25.127031 kernel: NET: Registered PF_INET protocol family
Jan 30 13:47:25.127057 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:47:25.127074 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 13:47:25.127093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:47:25.127111 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:47:25.127129 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 13:47:25.128219 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 13:47:25.128248 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:47:25.128267 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:47:25.128285 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:47:25.128304 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:47:25.128509 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:47:25.128676 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:47:25.128839 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:47:25.129010 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 30 13:47:25.129230 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:47:25.129257 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:47:25.129277 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 13:47:25.129296 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 30 13:47:25.129314 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:47:25.129334 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 13:47:25.129354 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:47:25.129379 kernel: Initialise system trusted keyrings
Jan 30 13:47:25.129397 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 13:47:25.129416 kernel: Key type asymmetric registered
Jan 30 13:47:25.129435 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:47:25.129453 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:47:25.129472 kernel: io scheduler mq-deadline registered
Jan 30 13:47:25.129491 kernel: io scheduler kyber registered
Jan 30 13:47:25.129508 kernel: io scheduler bfq registered
Jan 30 13:47:25.129524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:47:25.129549 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 13:47:25.129763 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 30 13:47:25.129789 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 30 13:47:25.129978 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 30 13:47:25.130002 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 13:47:25.132250 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 30 13:47:25.132286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:47:25.132462 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132484 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132512 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132531 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132757 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 30 13:47:25.132785 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:47:25.132805 kernel: i8042: Warning: Keylock active
Jan 30 13:47:25.132825 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:47:25.132845 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:47:25.133044 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:47:25.135299 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:47:25.135497 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:47:24 UTC (1738244844)
Jan 30 13:47:25.135831 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:47:25.135873 kernel: intel_pstate: CPU model not supported
Jan 30 13:47:25.135891 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:47:25.135919 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:47:25.135938 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:47:25.135957 kernel: Segment Routing with IPv6
Jan 30 13:47:25.135995 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:47:25.136014 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:47:25.136034 kernel: Key type dns_resolver registered
Jan 30 13:47:25.136052 kernel: IPI shorthand broadcast: enabled
Jan 30 13:47:25.136068 kernel: sched_clock: Marking stable (879005300, 150701826)->(1070479150, -40772024)
Jan 30 13:47:25.136088 kernel: registered taskstats version 1
Jan 30 13:47:25.136108 kernel: Loading compiled-in X.509 certificates
Jan 30 13:47:25.136127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:47:25.136146 kernel: Key type .fscrypt registered
Jan 30 13:47:25.136196 kernel: Key type fscrypt-provisioning registered
Jan 30 13:47:25.136216 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:47:25.136235 kernel: ima: No architecture policies found
Jan 30 13:47:25.136252 kernel: clk: Disabling unused clocks
Jan 30 13:47:25.136271 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:47:25.136291 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:47:25.136311 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:47:25.136330 kernel: Run /init as init process
Jan 30 13:47:25.136349 kernel: with arguments:
Jan 30 13:47:25.136373 kernel: /init
Jan 30 13:47:25.136391 kernel: with environment:
Jan 30 13:47:25.136410 kernel: HOME=/
Jan 30 13:47:25.136429 kernel: TERM=linux
Jan 30 13:47:25.136449 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:47:25.136468 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 30 13:47:25.136492 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:47:25.136521 systemd[1]: Detected virtualization google.
Jan 30 13:47:25.136541 systemd[1]: Detected architecture x86-64.
Jan 30 13:47:25.136561 systemd[1]: Running in initrd.
Jan 30 13:47:25.136581 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:47:25.136600 systemd[1]: Hostname set to .
Jan 30 13:47:25.136620 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:47:25.136640 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:47:25.136662 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:47:25.136687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:47:25.136708 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:47:25.136729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:47:25.136749 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:47:25.136770 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:47:25.136793 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:47:25.136815 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:47:25.136855 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:47:25.136877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:47:25.136918 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:47:25.136943 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:47:25.136973 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:47:25.136994 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:47:25.137020 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:47:25.137042 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:47:25.137064 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:47:25.137085 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:47:25.137107 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:47:25.137129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:47:25.137172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:47:25.137194 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:47:25.137215 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:47:25.137241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:47:25.137263 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:47:25.137284 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:47:25.137305 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:47:25.137327 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:47:25.137348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:25.137370 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:47:25.137391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:47:25.137459 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 13:47:25.137508 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:47:25.137536 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:47:25.137558 systemd-journald[183]: Journal started Jan 30 13:47:25.137602 systemd-journald[183]: Runtime Journal (/run/log/journal/ff2312217d764f249482d3f3892f3066) is 8.0M, max 148.7M, 140.7M free. Jan 30 13:47:25.125424 systemd-modules-load[184]: Inserted module 'overlay' Jan 30 13:47:25.150425 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:47:25.153233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:25.161192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:47:25.177459 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:47:25.184337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:47:25.184546 kernel: Bridge firewalling registered Jan 30 13:47:25.183463 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 30 13:47:25.190526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:47:25.203488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:47:25.206134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:47:25.211370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:47:25.225810 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:47:25.228056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:47:25.243544 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:25.252632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:25.264433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:47:25.271339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:47:25.302699 dracut-cmdline[215]: dracut-dracut-053 Jan 30 13:47:25.307667 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:25.333038 systemd-resolved[216]: Positive Trust Anchors: Jan 30 13:47:25.333654 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:47:25.333728 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:47:25.340403 systemd-resolved[216]: Defaulting to hostname 'linux'. Jan 30 13:47:25.343826 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:47:25.357891 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:25.413200 kernel: SCSI subsystem initialized Jan 30 13:47:25.425205 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 13:47:25.437203 kernel: iscsi: registered transport (tcp) Jan 30 13:47:25.460288 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:47:25.460374 kernel: QLogic iSCSI HBA Driver Jan 30 13:47:25.512458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:47:25.521394 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:47:25.549200 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:47:25.549300 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:47:25.551223 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:47:25.596199 kernel: raid6: avx2x4 gen() 17774 MB/s Jan 30 13:47:25.613191 kernel: raid6: avx2x2 gen() 17947 MB/s Jan 30 13:47:25.630628 kernel: raid6: avx2x1 gen() 13986 MB/s Jan 30 13:47:25.630677 kernel: raid6: using algorithm avx2x2 gen() 17947 MB/s Jan 30 13:47:25.648812 kernel: raid6: .... xor() 17380 MB/s, rmw enabled Jan 30 13:47:25.648879 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:47:25.672193 kernel: xor: automatically using best checksumming function avx Jan 30 13:47:25.852197 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:47:25.866676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:47:25.873415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:47:25.903710 systemd-udevd[399]: Using default interface naming scheme 'v255'. Jan 30 13:47:25.910627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:47:25.920645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:47:25.949487 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 30 13:47:25.987050 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 13:47:25.994386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:47:26.088642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:47:26.101207 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:47:26.142586 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:47:26.147732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:47:26.156289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:47:26.160277 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:47:26.170509 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:47:26.210690 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:47:26.279183 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:47:26.340350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:47:26.373298 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:47:26.373586 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:47:26.373616 kernel: AES CTR mode by8 optimization enabled Jan 30 13:47:26.373650 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 30 13:47:26.340536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:26.343520 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:47:26.355265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:47:26.355545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 13:47:26.429259 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 30 13:47:26.488068 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 30 13:47:26.488353 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 30 13:47:26.488574 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 30 13:47:26.488793 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:47:26.489006 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:47:26.489032 kernel: GPT:17805311 != 25165823 Jan 30 13:47:26.489055 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:47:26.489079 kernel: GPT:17805311 != 25165823 Jan 30 13:47:26.489101 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:47:26.489124 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:47:26.489148 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 30 13:47:26.363332 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:47:26.378592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:47:26.508649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:26.532771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:47:26.578372 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444) Jan 30 13:47:26.578422 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (455) Jan 30 13:47:26.578134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 30 13:47:26.602524 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 30 13:47:26.620842 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Jan 30 13:47:26.645460 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 30 13:47:26.645583 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 30 13:47:26.651415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:47:26.663660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:26.692924 disk-uuid[547]: Primary Header is updated. Jan 30 13:47:26.692924 disk-uuid[547]: Secondary Entries is updated. Jan 30 13:47:26.692924 disk-uuid[547]: Secondary Header is updated. Jan 30 13:47:26.722296 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:47:26.740217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:47:26.770351 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:47:27.760246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:47:27.761798 disk-uuid[548]: The operation has completed successfully. Jan 30 13:47:27.835895 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:47:27.836063 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:47:27.872382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:47:27.902605 sh[566]: Success Jan 30 13:47:27.925178 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:47:28.018855 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:47:28.026300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:47:28.045876 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:47:28.101743 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:47:28.101842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:28.101886 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:47:28.111179 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:47:28.123743 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:47:28.155196 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:47:28.162850 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:47:28.163838 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:47:28.179451 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:47:28.237524 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:28.237571 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:28.237598 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:47:28.237622 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:47:28.237646 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:47:28.209626 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:47:28.268221 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:28.276743 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:47:28.301567 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:47:28.388245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 30 13:47:28.396955 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:47:28.503406 systemd-networkd[749]: lo: Link UP Jan 30 13:47:28.503939 systemd-networkd[749]: lo: Gained carrier Jan 30 13:47:28.506364 systemd-networkd[749]: Enumeration completed Jan 30 13:47:28.512990 ignition[672]: Ignition 2.19.0 Jan 30 13:47:28.506854 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:47:28.512998 ignition[672]: Stage: fetch-offline Jan 30 13:47:28.507226 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:47:28.513040 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:28.507232 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:47:28.513051 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:47:28.510953 systemd-networkd[749]: eth0: Link UP Jan 30 13:47:28.513258 ignition[672]: parsed url from cmdline: "" Jan 30 13:47:28.510960 systemd-networkd[749]: eth0: Gained carrier Jan 30 13:47:28.513265 ignition[672]: no config URL provided Jan 30 13:47:28.510975 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:47:28.513534 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:47:28.517808 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:47:28.513578 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:47:28.519265 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 13:47:28.513589 ignition[672]: failed to fetch config: resource requires networking Jan 30 13:47:28.544267 systemd[1]: Reached target network.target - Network. 
Jan 30 13:47:28.513926 ignition[672]: Ignition finished successfully Jan 30 13:47:28.565481 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:47:28.588337 ignition[759]: Ignition 2.19.0 Jan 30 13:47:28.600098 unknown[759]: fetched base config from "system" Jan 30 13:47:28.588347 ignition[759]: Stage: fetch Jan 30 13:47:28.600107 unknown[759]: fetched base config from "system" Jan 30 13:47:28.588555 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:28.600113 unknown[759]: fetched user config from "gcp" Jan 30 13:47:28.588567 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:47:28.603299 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:47:28.588691 ignition[759]: parsed url from cmdline: "" Jan 30 13:47:28.616488 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:47:28.588701 ignition[759]: no config URL provided Jan 30 13:47:28.676893 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:47:28.588711 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:47:28.703519 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:47:28.588722 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:47:28.761862 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:47:28.588746 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 30 13:47:28.769283 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:47:28.593358 ignition[759]: GET result: OK Jan 30 13:47:28.796455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 30 13:47:28.593676 ignition[759]: parsing config with SHA512: 2c18763519aec8bd044934417be67e3fa627e72f05dddc66db7e7aa1199e05662beac18a92177e17b59968c131aecc74104336e6c9ec175eb15f8419926ba78e Jan 30 13:47:28.802525 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:47:28.601323 ignition[759]: fetch: fetch complete Jan 30 13:47:28.819591 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:47:28.601334 ignition[759]: fetch: fetch passed Jan 30 13:47:28.833557 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:47:28.601428 ignition[759]: Ignition finished successfully Jan 30 13:47:28.855546 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:47:28.674106 ignition[765]: Ignition 2.19.0 Jan 30 13:47:28.674120 ignition[765]: Stage: kargs Jan 30 13:47:28.674412 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:28.674428 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:47:28.675500 ignition[765]: kargs: kargs passed Jan 30 13:47:28.675589 ignition[765]: Ignition finished successfully Jan 30 13:47:28.726712 ignition[772]: Ignition 2.19.0 Jan 30 13:47:28.726721 ignition[772]: Stage: disks Jan 30 13:47:28.726930 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:28.726942 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:47:28.728086 ignition[772]: disks: disks passed Jan 30 13:47:28.728144 ignition[772]: Ignition finished successfully Jan 30 13:47:28.916359 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 13:47:29.097992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:47:29.103301 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:47:29.244185 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:47:29.245568 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:47:29.246447 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:47:29.268292 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:47:29.303326 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:47:29.313080 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:47:29.351418 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Jan 30 13:47:29.351473 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:29.351501 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:29.351527 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:47:29.313224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:47:29.405377 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:47:29.405428 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:47:29.313279 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:47:29.389585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:47:29.414649 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:47:29.438417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:47:29.558540 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:47:29.569336 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:47:29.580345 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:47:29.591329 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:47:29.591585 systemd-networkd[749]: eth0: Gained IPv6LL Jan 30 13:47:29.732129 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:47:29.738328 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:47:29.780212 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:29.780462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:47:29.797861 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:47:29.823794 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:47:29.833140 ignition[901]: INFO : Ignition 2.19.0 Jan 30 13:47:29.833140 ignition[901]: INFO : Stage: mount Jan 30 13:47:29.860323 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:29.860323 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:47:29.860323 ignition[901]: INFO : mount: mount passed Jan 30 13:47:29.860323 ignition[901]: INFO : Ignition finished successfully Jan 30 13:47:29.839773 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:47:29.852319 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:47:29.895441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 13:47:29.946242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912) Jan 30 13:47:29.965895 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:29.965984 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:29.966011 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:47:29.988468 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:47:29.988564 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:47:29.992065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:47:30.033334 ignition[929]: INFO : Ignition 2.19.0 Jan 30 13:47:30.033334 ignition[929]: INFO : Stage: files Jan 30 13:47:30.047356 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:30.047356 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:47:30.047356 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:47:30.047356 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:47:30.046931 unknown[929]: wrote ssh authorized keys file for user: core Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:47:33.267177 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:47:33.398003 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:47:33.694838 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:47:34.056618 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:34.056618 ignition[929]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:47:34.097395 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:47:34.097395 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:47:34.097395 ignition[929]: INFO : files: files passed Jan 30 13:47:34.097395 ignition[929]: INFO : Ignition finished successfully Jan 30 13:47:34.062501 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:47:34.092633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:47:34.131133 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:47:34.168976 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:47:34.380366 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:34.380366 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:34.169097 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:47:34.448348 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:34.193885 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:47:34.218776 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:47:34.245402 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:47:34.352406 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:47:34.352634 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:47:34.373215 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:47:34.391541 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:47:34.406658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:47:34.413496 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:47:34.483481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:47:34.501450 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:47:34.543105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:34.558704 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:47:34.568714 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:47:34.599637 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:47:34.599842 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:47:34.650602 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:47:34.670522 systemd[1]: Stopped target basic.target - Basic System. 
Jan 30 13:47:34.687605 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:47:34.705542 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:47:34.727578 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:47:34.748500 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:47:34.766567 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:47:34.788617 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:47:34.808567 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:47:34.828542 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:47:34.846457 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:47:34.846716 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:47:34.877620 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:47:34.897526 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:47:34.916509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:47:34.916681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:47:34.935492 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:47:34.935734 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:47:34.965600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 30 13:47:35.048470 ignition[982]: INFO : Ignition 2.19.0 Jan 30 13:47:35.048470 ignition[982]: INFO : Stage: umount Jan 30 13:47:35.048470 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:35.048470 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:47:35.048470 ignition[982]: INFO : umount: umount passed Jan 30 13:47:35.048470 ignition[982]: INFO : Ignition finished successfully Jan 30 13:47:34.965830 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:47:34.987619 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:47:34.987806 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:47:35.012448 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:47:35.057353 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:47:35.057645 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:47:35.082661 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:47:35.151414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:47:35.151721 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:47:35.172651 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:47:35.172862 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:47:35.209669 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:47:35.210841 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:47:35.210960 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:47:35.215083 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:47:35.215241 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:47:35.243849 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 30 13:47:35.243980 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:47:35.271797 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:47:35.271865 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:47:35.289575 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:47:35.289655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:47:35.297625 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:47:35.297698 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:47:35.314669 systemd[1]: Stopped target network.target - Network. Jan 30 13:47:35.339446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:47:35.339566 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:47:35.348639 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:47:35.366579 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:47:35.370301 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:47:35.381555 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:47:35.402567 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:47:35.433508 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:47:35.433581 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:47:35.441572 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:47:35.441637 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:47:35.458596 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:47:35.458677 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:47:35.475621 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 30 13:47:35.475700 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:47:35.492623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:47:35.492702 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:47:35.509855 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:47:35.514239 systemd-networkd[749]: eth0: DHCPv6 lease lost Jan 30 13:47:35.545688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:47:35.554952 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:47:35.555085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:47:35.583114 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:47:35.583424 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:47:35.605865 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:47:35.605947 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:47:35.629317 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:47:35.639304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:47:35.639546 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:47:35.647606 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:47:35.647680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:35.674606 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:47:35.674678 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:47:35.701428 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:47:35.701551 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:47:35.721610 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:47:35.740910 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:47:35.741088 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:47:35.777600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:47:35.777672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:47:35.791577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:47:35.791635 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:47:35.819491 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:47:35.819581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:47:36.176376 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 13:47:35.847618 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:47:35.847703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:47:35.891358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:47:35.891614 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:35.928406 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:47:35.968325 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:47:35.968459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:47:35.989478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:47:35.989580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:36.010995 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 30 13:47:36.011129 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:47:36.030698 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:47:36.030821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:47:36.052781 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:47:36.088406 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:47:36.135216 systemd[1]: Switching root. Jan 30 13:47:36.333373 systemd-journald[183]: Journal stopped Jan 30 13:47:25.113600 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:47:25.113648 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:25.113666 kernel: BIOS-provided physical RAM map: Jan 30 13:47:25.113681 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 30 13:47:25.113693 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 30 13:47:25.113706 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 30 13:47:25.113721 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 30 13:47:25.113739 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 30 13:47:25.113753 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 30 13:47:25.113767 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 30 13:47:25.113781 kernel: BIOS-e820: [mem 
0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 30 13:47:25.113794 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 30 13:47:25.113808 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 30 13:47:25.113821 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 30 13:47:25.113841 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 30 13:47:25.113856 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 30 13:47:25.113872 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 30 13:47:25.113887 kernel: NX (Execute Disable) protection: active Jan 30 13:47:25.113904 kernel: APIC: Static calls initialized Jan 30 13:47:25.113920 kernel: efi: EFI v2.7 by EDK II Jan 30 13:47:25.113936 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 30 13:47:25.113952 kernel: SMBIOS 2.4 present. Jan 30 13:47:25.113969 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 30 13:47:25.113985 kernel: Hypervisor detected: KVM Jan 30 13:47:25.114005 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:47:25.114021 kernel: kvm-clock: using sched offset of 12638355307 cycles Jan 30 13:47:25.114044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:47:25.114061 kernel: tsc: Detected 2299.998 MHz processor Jan 30 13:47:25.114077 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:47:25.114094 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:47:25.114110 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 30 13:47:25.114126 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 30 13:47:25.114142 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:47:25.114192 kernel: last_pfn = 0xbffe0 
max_arch_pfn = 0x400000000 Jan 30 13:47:25.114208 kernel: Using GB pages for direct mapping Jan 30 13:47:25.114224 kernel: Secure boot disabled Jan 30 13:47:25.114240 kernel: ACPI: Early table checksum verification disabled Jan 30 13:47:25.114257 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 30 13:47:25.114273 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 30 13:47:25.114290 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 30 13:47:25.114313 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 30 13:47:25.114334 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 30 13:47:25.114351 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 30 13:47:25.114369 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 30 13:47:25.114386 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 30 13:47:25.114404 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 30 13:47:25.114420 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 30 13:47:25.114441 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 30 13:47:25.114458 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 30 13:47:25.114475 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 30 13:47:25.114493 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 30 13:47:25.114510 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 30 13:47:25.114527 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 30 13:47:25.114545 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 30 13:47:25.114562 kernel: ACPI: Reserving 
APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 30 13:47:25.114579 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 30 13:47:25.114599 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 30 13:47:25.114617 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:47:25.114634 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:47:25.114651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:47:25.114668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 30 13:47:25.114685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 30 13:47:25.114701 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 30 13:47:25.114716 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 30 13:47:25.114731 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 30 13:47:25.114751 kernel: Zone ranges: Jan 30 13:47:25.114767 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:47:25.114784 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:47:25.114801 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:47:25.114818 kernel: Movable zone start for each node Jan 30 13:47:25.114836 kernel: Early memory node ranges Jan 30 13:47:25.114852 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 30 13:47:25.114870 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 30 13:47:25.114887 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 30 13:47:25.114902 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 30 13:47:25.114922 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:47:25.114939 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 30 13:47:25.114956 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 
13:47:25.114973 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 30 13:47:25.114988 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 30 13:47:25.115006 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 30 13:47:25.115024 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 30 13:47:25.115048 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 30 13:47:25.115066 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:47:25.115088 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:47:25.115105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:47:25.115120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:47:25.115137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:47:25.115168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:47:25.115194 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:47:25.115211 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:47:25.115229 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 30 13:47:25.115246 kernel: Booting paravirtualized kernel on KVM Jan 30 13:47:25.115269 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:47:25.115287 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:47:25.115304 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:47:25.115322 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:47:25.115338 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:47:25.115354 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:47:25.115371 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:47:25.115390 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a 
mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:25.115413 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:47:25.115429 kernel: random: crng init done Jan 30 13:47:25.115447 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:47:25.115464 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:47:25.115482 kernel: Fallback order for Node 0: 0 Jan 30 13:47:25.115499 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 30 13:47:25.115516 kernel: Policy zone: Normal Jan 30 13:47:25.115533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:47:25.115549 kernel: software IO TLB: area num 2. Jan 30 13:47:25.115570 kernel: Memory: 7513372K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346952K reserved, 0K cma-reserved) Jan 30 13:47:25.115587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:47:25.115603 kernel: Kernel/User page tables isolation: enabled Jan 30 13:47:25.115620 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:47:25.115635 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:47:25.115652 kernel: Dynamic Preempt: voluntary Jan 30 13:47:25.115670 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:47:25.115694 kernel: rcu: RCU event tracing is enabled. Jan 30 13:47:25.115728 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:47:25.115744 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:47:25.115781 kernel: Rude variant of Tasks RCU enabled. 
Jan 30 13:47:25.115801 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:47:25.115817 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:47:25.115833 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:47:25.115849 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:47:25.115865 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:47:25.115882 kernel: Console: colour dummy device 80x25 Jan 30 13:47:25.115902 kernel: printk: console [ttyS0] enabled Jan 30 13:47:25.115919 kernel: ACPI: Core revision 20230628 Jan 30 13:47:25.115938 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:47:25.115955 kernel: x2apic enabled Jan 30 13:47:25.115972 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:47:25.115988 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 30 13:47:25.116004 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 30 13:47:25.116021 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 30 13:47:25.116052 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 30 13:47:25.116071 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 30 13:47:25.116089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:47:25.116108 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:47:25.116127 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:47:25.116145 kernel: Spectre V2 : Mitigation: IBRS Jan 30 13:47:25.116180 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:47:25.116199 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:47:25.116218 kernel: RETBleed: Mitigation: IBRS Jan 30 13:47:25.116242 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:47:25.116260 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 30 13:47:25.116278 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:47:25.116297 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:47:25.116314 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:47:25.116331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:47:25.116349 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:47:25.116367 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:47:25.116386 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:47:25.116408 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. 
Jan 30 13:47:25.116426 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:47:25.116442 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:47:25.116459 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:47:25.116478 kernel: landlock: Up and running. Jan 30 13:47:25.116497 kernel: SELinux: Initializing. Jan 30 13:47:25.116515 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:47:25.116532 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:47:25.116551 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 30 13:47:25.116576 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:47:25.116595 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:47:25.116613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:47:25.116630 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 30 13:47:25.116647 kernel: signal: max sigframe size: 1776 Jan 30 13:47:25.116663 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:47:25.116680 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:47:25.116697 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:47:25.116713 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:47:25.116736 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:47:25.116752 kernel: .... node #0, CPUs: #1 Jan 30 13:47:25.116770 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 30 13:47:25.116787 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 30 13:47:25.116804 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:47:25.116821 kernel: smpboot: Max logical packages: 1 Jan 30 13:47:25.116861 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 30 13:47:25.116877 kernel: devtmpfs: initialized Jan 30 13:47:25.116900 kernel: x86/mm: Memory block size: 128MB Jan 30 13:47:25.116916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 30 13:47:25.116933 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:47:25.116951 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:47:25.116968 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:47:25.116985 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:47:25.117003 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:47:25.117022 kernel: audit: type=2000 audit(1738244844.201:1): state=initialized audit_enabled=0 res=1 Jan 30 13:47:25.117080 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:47:25.117103 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:47:25.117119 kernel: cpuidle: using governor menu Jan 30 13:47:25.117137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:47:25.117187 kernel: dca service started, version 1.12.1 Jan 30 13:47:25.117208 kernel: PCI: Using configuration type 1 for base access Jan 30 13:47:25.117227 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:47:25.117246 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:47:25.117263 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:47:25.117280 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:47:25.117302 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:47:25.117318 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:47:25.117335 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:47:25.117354 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:47:25.117372 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:47:25.117391 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 30 13:47:25.117411 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:47:25.117428 kernel: ACPI: Interpreter enabled Jan 30 13:47:25.117448 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:47:25.117469 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:47:25.117488 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:47:25.117506 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:47:25.117524 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 30 13:47:25.117543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:47:25.117790 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:47:25.117975 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:47:25.118172 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:47:25.118201 kernel: PCI host bridge to bus 0000:00 Jan 30 13:47:25.118376 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:47:25.118551 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 
13:47:25.118711 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:47:25.118876 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 30 13:47:25.119047 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:47:25.121336 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:47:25.121553 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 30 13:47:25.121748 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 13:47:25.121925 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:47:25.122115 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 30 13:47:25.123277 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 30 13:47:25.124286 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 30 13:47:25.124530 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:47:25.124740 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 30 13:47:25.124941 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 30 13:47:25.126198 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:47:25.126429 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 30 13:47:25.126609 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 30 13:47:25.126639 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:47:25.126658 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:47:25.126677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:47:25.126695 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:47:25.126714 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:47:25.126732 kernel: iommu: Default domain type: Translated
Jan 30 13:47:25.126751 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:47:25.126769 kernel: efivars: Registered efivars operations
Jan 30 13:47:25.126787 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:47:25.126806 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:47:25.126828 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 30 13:47:25.126846 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 30 13:47:25.126864 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 30 13:47:25.126881 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 30 13:47:25.126899 kernel: vgaarb: loaded
Jan 30 13:47:25.126917 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:47:25.126935 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:47:25.126954 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:47:25.126976 kernel: pnp: PnP ACPI init
Jan 30 13:47:25.126994 kernel: pnp: PnP ACPI: found 7 devices
Jan 30 13:47:25.127013 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:47:25.127031 kernel: NET: Registered PF_INET protocol family
Jan 30 13:47:25.127057 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:47:25.127074 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 13:47:25.127093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:47:25.127111 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:47:25.127129 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 13:47:25.128219 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 13:47:25.128248 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:47:25.128267 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:47:25.128285 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:47:25.128304 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:47:25.128509 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:47:25.128676 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:47:25.128839 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:47:25.129010 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 30 13:47:25.129230 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:47:25.129257 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:47:25.129277 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 13:47:25.129296 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 30 13:47:25.129314 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:47:25.129334 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 13:47:25.129354 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:47:25.129379 kernel: Initialise system trusted keyrings
Jan 30 13:47:25.129397 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 13:47:25.129416 kernel: Key type asymmetric registered
Jan 30 13:47:25.129435 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:47:25.129453 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:47:25.129472 kernel: io scheduler mq-deadline registered
Jan 30 13:47:25.129491 kernel: io scheduler kyber registered
Jan 30 13:47:25.129508 kernel: io scheduler bfq registered
Jan 30 13:47:25.129524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:47:25.129549 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 13:47:25.129763 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 30 13:47:25.129789 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 30 13:47:25.129978 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 30 13:47:25.130002 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 13:47:25.132250 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 30 13:47:25.132286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:47:25.132462 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132484 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132512 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132531 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 30 13:47:25.132757 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 30 13:47:25.132785 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:47:25.132805 kernel: i8042: Warning: Keylock active
Jan 30 13:47:25.132825 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:47:25.132845 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:47:25.133044 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:47:25.135299 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:47:25.135497 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:47:24 UTC (1738244844)
Jan 30 13:47:25.135831 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:47:25.135873 kernel: intel_pstate: CPU model not supported
Jan 30 13:47:25.135891 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:47:25.135919 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:47:25.135938 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:47:25.135957 kernel: Segment Routing with IPv6
Jan 30 13:47:25.135995 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:47:25.136014 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:47:25.136034 kernel: Key type dns_resolver registered
Jan 30 13:47:25.136052 kernel: IPI shorthand broadcast: enabled
Jan 30 13:47:25.136068 kernel: sched_clock: Marking stable (879005300, 150701826)->(1070479150, -40772024)
Jan 30 13:47:25.136088 kernel: registered taskstats version 1
Jan 30 13:47:25.136108 kernel: Loading compiled-in X.509 certificates
Jan 30 13:47:25.136127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:47:25.136146 kernel: Key type .fscrypt registered
Jan 30 13:47:25.136196 kernel: Key type fscrypt-provisioning registered
Jan 30 13:47:25.136216 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:47:25.136235 kernel: ima: No architecture policies found
Jan 30 13:47:25.136252 kernel: clk: Disabling unused clocks
Jan 30 13:47:25.136271 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:47:25.136291 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:47:25.136311 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:47:25.136330 kernel: Run /init as init process
Jan 30 13:47:25.136349 kernel: with arguments:
Jan 30 13:47:25.136373 kernel: /init
Jan 30 13:47:25.136391 kernel: with environment:
Jan 30 13:47:25.136410 kernel: HOME=/
Jan 30 13:47:25.136429 kernel: TERM=linux
Jan 30 13:47:25.136449 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:47:25.136468 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 30 13:47:25.136492 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:47:25.136521 systemd[1]: Detected virtualization google.
Jan 30 13:47:25.136541 systemd[1]: Detected architecture x86-64.
Jan 30 13:47:25.136561 systemd[1]: Running in initrd.
Jan 30 13:47:25.136581 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:47:25.136600 systemd[1]: Hostname set to .
Jan 30 13:47:25.136620 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:47:25.136640 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:47:25.136662 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:47:25.136687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:47:25.136708 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:47:25.136729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:47:25.136749 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:47:25.136770 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:47:25.136793 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:47:25.136815 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:47:25.136855 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:47:25.136877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:47:25.136918 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:47:25.136943 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:47:25.136973 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:47:25.136994 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:47:25.137020 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:47:25.137042 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:47:25.137064 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:47:25.137085 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:47:25.137107 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:47:25.137129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:47:25.137172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:47:25.137194 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:47:25.137215 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:47:25.137241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:47:25.137263 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:47:25.137284 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:47:25.137305 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:47:25.137327 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:47:25.137348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:25.137370 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:47:25.137391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:47:25.137459 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 13:47:25.137508 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:47:25.137536 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:47:25.137558 systemd-journald[183]: Journal started
Jan 30 13:47:25.137602 systemd-journald[183]: Runtime Journal (/run/log/journal/ff2312217d764f249482d3f3892f3066) is 8.0M, max 148.7M, 140.7M free.
Jan 30 13:47:25.125424 systemd-modules-load[184]: Inserted module 'overlay'
Jan 30 13:47:25.150425 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:47:25.153233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:25.161192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:47:25.177459 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:25.184337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:47:25.184546 kernel: Bridge firewalling registered
Jan 30 13:47:25.183463 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 30 13:47:25.190526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:47:25.203488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:47:25.206134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:47:25.211370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:47:25.225810 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:47:25.228056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:47:25.243544 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:25.252632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:47:25.264433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:47:25.271339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:47:25.302699 dracut-cmdline[215]: dracut-dracut-053
Jan 30 13:47:25.307667 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:47:25.333038 systemd-resolved[216]: Positive Trust Anchors:
Jan 30 13:47:25.333654 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:47:25.333728 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:47:25.340403 systemd-resolved[216]: Defaulting to hostname 'linux'.
Jan 30 13:47:25.343826 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:47:25.357891 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:47:25.413200 kernel: SCSI subsystem initialized
Jan 30 13:47:25.425205 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:47:25.437203 kernel: iscsi: registered transport (tcp)
Jan 30 13:47:25.460288 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:47:25.460374 kernel: QLogic iSCSI HBA Driver
Jan 30 13:47:25.512458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:47:25.521394 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:47:25.549200 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:47:25.549300 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:47:25.551223 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:47:25.596199 kernel: raid6: avx2x4 gen() 17774 MB/s
Jan 30 13:47:25.613191 kernel: raid6: avx2x2 gen() 17947 MB/s
Jan 30 13:47:25.630628 kernel: raid6: avx2x1 gen() 13986 MB/s
Jan 30 13:47:25.630677 kernel: raid6: using algorithm avx2x2 gen() 17947 MB/s
Jan 30 13:47:25.648812 kernel: raid6: .... xor() 17380 MB/s, rmw enabled
Jan 30 13:47:25.648879 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:47:25.672193 kernel: xor: automatically using best checksumming function avx
Jan 30 13:47:25.852197 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:47:25.866676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:47:25.873415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:47:25.903710 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 30 13:47:25.910627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:47:25.920645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:47:25.949487 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 30 13:47:25.987050 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:47:25.994386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:47:26.088642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:47:26.101207 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:47:26.142586 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:47:26.147732 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:47:26.156289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:47:26.160277 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:47:26.170509 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:47:26.210690 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:47:26.279183 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:47:26.340350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:47:26.373298 kernel: scsi host0: Virtio SCSI HBA
Jan 30 13:47:26.373586 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:47:26.373616 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:47:26.373650 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 30 13:47:26.340536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:26.343520 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:26.355265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:26.355545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:26.429259 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 30 13:47:26.488068 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 30 13:47:26.488353 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 30 13:47:26.488574 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 30 13:47:26.488793 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 13:47:26.489006 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:47:26.489032 kernel: GPT:17805311 != 25165823
Jan 30 13:47:26.489055 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:47:26.489079 kernel: GPT:17805311 != 25165823
Jan 30 13:47:26.489101 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:47:26.489124 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:26.489148 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 30 13:47:26.363332 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:26.378592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:26.508649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:26.532771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:26.578372 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444)
Jan 30 13:47:26.578422 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (455)
Jan 30 13:47:26.578134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 30 13:47:26.602524 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 30 13:47:26.620842 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 30 13:47:26.645460 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 30 13:47:26.645583 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 30 13:47:26.651415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:47:26.663660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:26.692924 disk-uuid[547]: Primary Header is updated.
Jan 30 13:47:26.692924 disk-uuid[547]: Secondary Entries is updated.
Jan 30 13:47:26.692924 disk-uuid[547]: Secondary Header is updated.
Jan 30 13:47:26.722296 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:26.740217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:26.770351 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:27.760246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:27.761798 disk-uuid[548]: The operation has completed successfully.
Jan 30 13:47:27.835895 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:47:27.836063 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:47:27.872382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:47:27.902605 sh[566]: Success
Jan 30 13:47:27.925178 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:47:28.018855 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:47:28.026300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:47:28.045876 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:47:28.101743 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:47:28.101842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:28.101886 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:47:28.111179 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:47:28.123743 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:47:28.155196 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:47:28.162850 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:47:28.163838 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:47:28.179451 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:47:28.237524 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:28.237571 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:28.237598 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:28.237622 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:47:28.237646 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:28.209626 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:47:28.268221 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:28.276743 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:47:28.301567 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:47:28.388245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:47:28.396955 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:47:28.503406 systemd-networkd[749]: lo: Link UP
Jan 30 13:47:28.503939 systemd-networkd[749]: lo: Gained carrier
Jan 30 13:47:28.506364 systemd-networkd[749]: Enumeration completed
Jan 30 13:47:28.512990 ignition[672]: Ignition 2.19.0
Jan 30 13:47:28.506854 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:47:28.512998 ignition[672]: Stage: fetch-offline
Jan 30 13:47:28.507226 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:28.513040 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:28.507232 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:47:28.513051 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:47:28.510953 systemd-networkd[749]: eth0: Link UP
Jan 30 13:47:28.513258 ignition[672]: parsed url from cmdline: ""
Jan 30 13:47:28.510960 systemd-networkd[749]: eth0: Gained carrier
Jan 30 13:47:28.513265 ignition[672]: no config URL provided
Jan 30 13:47:28.510975 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:28.513534 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:47:28.517808 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:47:28.513578 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:47:28.519265 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 30 13:47:28.513589 ignition[672]: failed to fetch config: resource requires networking
Jan 30 13:47:28.544267 systemd[1]: Reached target network.target - Network.
Jan 30 13:47:28.513926 ignition[672]: Ignition finished successfully
Jan 30 13:47:28.565481 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:47:28.588337 ignition[759]: Ignition 2.19.0
Jan 30 13:47:28.600098 unknown[759]: fetched base config from "system"
Jan 30 13:47:28.588347 ignition[759]: Stage: fetch
Jan 30 13:47:28.600107 unknown[759]: fetched base config from "system"
Jan 30 13:47:28.588555 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:28.600113 unknown[759]: fetched user config from "gcp"
Jan 30 13:47:28.588567 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:47:28.603299 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:47:28.588691 ignition[759]: parsed url from cmdline: ""
Jan 30 13:47:28.616488 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:47:28.588701 ignition[759]: no config URL provided
Jan 30 13:47:28.676893 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:47:28.588711 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:47:28.703519 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:47:28.588722 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:47:28.761862 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:47:28.588746 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 30 13:47:28.769283 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:47:28.593358 ignition[759]: GET result: OK
Jan 30 13:47:28.796455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:47:28.593676 ignition[759]: parsing config with SHA512: 2c18763519aec8bd044934417be67e3fa627e72f05dddc66db7e7aa1199e05662beac18a92177e17b59968c131aecc74104336e6c9ec175eb15f8419926ba78e
Jan 30 13:47:28.802525 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:47:28.601323 ignition[759]: fetch: fetch complete
Jan 30 13:47:28.819591 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:47:28.601334 ignition[759]: fetch: fetch passed
Jan 30 13:47:28.833557 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:47:28.601428 ignition[759]: Ignition finished successfully
Jan 30 13:47:28.855546 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:47:28.674106 ignition[765]: Ignition 2.19.0
Jan 30 13:47:28.674120 ignition[765]: Stage: kargs
Jan 30 13:47:28.674412 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:28.674428 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:47:28.675500 ignition[765]: kargs: kargs passed
Jan 30 13:47:28.675589 ignition[765]: Ignition finished successfully
Jan 30 13:47:28.726712 ignition[772]: Ignition 2.19.0
Jan 30 13:47:28.726721 ignition[772]: Stage: disks
Jan 30 13:47:28.726930 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:28.726942 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:47:28.728086 ignition[772]: disks: disks passed
Jan 30 13:47:28.728144 ignition[772]: Ignition finished successfully
Jan 30 13:47:28.916359 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 13:47:29.097992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:47:29.103301 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:47:29.244185 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:47:29.245568 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:47:29.246447 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:47:29.268292 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:47:29.303326 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:47:29.313080 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:47:29.351418 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Jan 30 13:47:29.351473 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:29.351501 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:29.351527 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:29.313224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:47:29.405377 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:47:29.405428 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:29.313279 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:47:29.389585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:47:29.414649 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:47:29.438417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:47:29.558540 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:47:29.569336 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:47:29.580345 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:47:29.591329 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:47:29.591585 systemd-networkd[749]: eth0: Gained IPv6LL
Jan 30 13:47:29.732129 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:47:29.738328 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:47:29.780212 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:29.780462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:47:29.797861 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:47:29.823794 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:47:29.833140 ignition[901]: INFO : Ignition 2.19.0
Jan 30 13:47:29.833140 ignition[901]: INFO : Stage: mount
Jan 30 13:47:29.860323 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:29.860323 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:47:29.860323 ignition[901]: INFO : mount: mount passed
Jan 30 13:47:29.860323 ignition[901]: INFO : Ignition finished successfully
Jan 30 13:47:29.839773 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:47:29.852319 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:47:29.895441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:47:29.946242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912)
Jan 30 13:47:29.965895 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:29.965984 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:29.966011 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:29.988468 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:47:29.988564 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:29.992065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:47:30.033334 ignition[929]: INFO : Ignition 2.19.0
Jan 30 13:47:30.033334 ignition[929]: INFO : Stage: files
Jan 30 13:47:30.047356 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:30.047356 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:47:30.047356 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:47:30.047356 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:47:30.047356 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:47:30.046931 unknown[929]: wrote ssh authorized keys file for user: core
Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:47:30.150326 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:47:33.267177 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:47:33.398003 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:33.415342 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:47:33.694838 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:47:34.056618 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:34.056618 ignition[929]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:47:34.097395 ignition[929]: INFO : files: files passed
Jan 30 13:47:34.097395 ignition[929]: INFO : Ignition finished successfully
Jan 30 13:47:34.062501 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:47:34.092633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:47:34.131133 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:47:34.168976 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:47:34.380366 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:47:34.380366 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:47:34.169097 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:47:34.448348 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:47:34.193885 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:47:34.218776 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:47:34.245402 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:47:34.352406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:47:34.352634 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:47:34.373215 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:47:34.391541 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:47:34.406658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:47:34.413496 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:47:34.483481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:47:34.501450 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:47:34.543105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:47:34.558704 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:47:34.568714 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:47:34.599637 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:47:34.599842 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:47:34.650602 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:47:34.670522 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:47:34.687605 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:47:34.705542 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:47:34.727578 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:47:34.748500 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:47:34.766567 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:47:34.788617 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:47:34.808567 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:47:34.828542 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:47:34.846457 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:47:34.846716 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:47:34.877620 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:47:34.897526 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:47:34.916509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:47:34.916681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:47:34.935492 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:47:34.935734 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:47:34.965600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:47:35.048470 ignition[982]: INFO : Ignition 2.19.0
Jan 30 13:47:35.048470 ignition[982]: INFO : Stage: umount
Jan 30 13:47:35.048470 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:35.048470 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 13:47:35.048470 ignition[982]: INFO : umount: umount passed
Jan 30 13:47:35.048470 ignition[982]: INFO : Ignition finished successfully
Jan 30 13:47:34.965830 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:47:34.987619 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:47:34.987806 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:47:35.012448 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:47:35.057353 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:47:35.057645 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:47:35.082661 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:47:35.151414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:47:35.151721 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:47:35.172651 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:47:35.172862 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:47:35.209669 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:47:35.210841 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:47:35.210960 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:47:35.215083 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:47:35.215241 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:47:35.243849 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:47:35.243980 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:47:35.271797 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:47:35.271865 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:47:35.289575 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:47:35.289655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:47:35.297625 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:47:35.297698 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:47:35.314669 systemd[1]: Stopped target network.target - Network.
Jan 30 13:47:35.339446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:47:35.339566 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:47:35.348639 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:47:35.366579 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:47:35.370301 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:47:35.381555 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:47:35.402567 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:47:35.433508 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:47:35.433581 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:47:35.441572 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:47:35.441637 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:47:35.458596 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:47:35.458677 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:47:35.475621 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:47:35.475700 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:47:35.492623 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:47:35.492702 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:47:35.509855 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:47:35.514239 systemd-networkd[749]: eth0: DHCPv6 lease lost
Jan 30 13:47:35.545688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:47:35.554952 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:47:35.555085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:47:35.583114 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:47:35.583424 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:47:35.605865 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:47:35.605947 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:47:35.629317 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:47:35.639304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:47:35.639546 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:47:35.647606 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:47:35.647680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:47:35.674606 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:47:35.674678 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:47:35.701428 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:47:35.701551 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:47:35.721610 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:47:35.740910 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:47:35.741088 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:47:35.777600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:47:35.777672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:47:35.791577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:47:35.791635 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:47:35.819491 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:47:35.819581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:47:36.176376 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:47:35.847618 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:47:35.847703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:47:35.891358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:47:35.891614 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:35.928406 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:47:35.968325 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:47:35.968459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:47:35.989478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:35.989580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:36.010995 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:47:36.011129 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:47:36.030698 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:47:36.030821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:47:36.052781 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:47:36.088406 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:47:36.135216 systemd[1]: Switching root.
Jan 30 13:47:36.333373 systemd-journald[183]: Journal stopped
Jan 30 13:47:38.866758 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:47:38.866813 kernel: SELinux: policy capability open_perms=1
Jan 30 13:47:38.866835 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:47:38.866853 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:47:38.866870 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:47:38.866888 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:47:38.866909 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:47:38.866934 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:47:38.866952 kernel: audit: type=1403 audit(1738244856.849:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:47:38.866973 systemd[1]: Successfully loaded SELinux policy in 81.993ms.
Jan 30 13:47:38.866997 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.598ms.
Jan 30 13:47:38.867018 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:47:38.867039 systemd[1]: Detected virtualization google.
Jan 30 13:47:38.867058 systemd[1]: Detected architecture x86-64.
Jan 30 13:47:38.867090 systemd[1]: Detected first boot.
Jan 30 13:47:38.867113 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:47:38.867134 zram_generator::config[1040]: No configuration found.
Jan 30 13:47:38.867195 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:47:38.867218 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:47:38.867244 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 13:47:38.867267 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:47:38.867288 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:47:38.867309 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:47:38.867330 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:47:38.867354 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:47:38.867376 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:47:38.867401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:47:38.867423 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:47:38.867446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:47:38.867467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:47:38.867489 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:47:38.867511 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:47:38.867532 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:47:38.867554 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:47:38.867580 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:47:38.867602 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:47:38.867623 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:47:38.867644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:47:38.867666 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:47:38.867688 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:47:38.867715 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:47:38.867737 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:47:38.867760 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:47:38.867786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:47:38.867807 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:47:38.867830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:47:38.867852 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:47:38.867874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:47:38.867896 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:47:38.867920 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:47:38.867947 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:47:38.867970 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:47:38.867992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:38.868015 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:47:38.868041 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:47:38.868064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:47:38.868094 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:47:38.868118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:47:38.868141 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:47:38.868176 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:47:38.868199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:47:38.868221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:47:38.868244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:47:38.868271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:47:38.868293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:47:38.868317 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:47:38.868338 kernel: ACPI: bus type drm_connector registered
Jan 30 13:47:38.868360 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 13:47:38.868383 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 13:47:38.868405 kernel: fuse: init (API version 7.39)
Jan 30 13:47:38.868427 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:47:38.868454 kernel: loop: module loaded
Jan 30 13:47:38.868475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:47:38.868530 systemd-journald[1144]: Collecting audit messages is disabled.
Jan 30 13:47:38.868577 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:47:38.868605 systemd-journald[1144]: Journal started
Jan 30 13:47:38.868650 systemd-journald[1144]: Runtime Journal (/run/log/journal/53af10c191a242ab9bd84c671fcd7ce0) is 8.0M, max 148.7M, 140.7M free.
Jan 30 13:47:38.901193 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:47:38.913213 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:47:38.940229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:38.951201 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:47:38.967762 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:47:38.977515 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:47:38.987565 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:47:38.997523 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:47:39.007534 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:47:39.017544 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:47:39.029006 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:47:39.040994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:47:39.052975 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:47:39.053351 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:47:39.064870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:47:39.065189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:47:39.076690 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:47:39.076946 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:47:39.087859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:47:39.088207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:47:39.099739 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:47:39.100001 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:47:39.110709 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:47:39.110967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:47:39.122815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:47:39.133689 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:47:39.145822 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:47:39.157812 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:47:39.181612 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:47:39.203339 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:47:39.218305 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:47:39.228381 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:47:39.237404 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:47:39.255444 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:47:39.266357 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:47:39.275845 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:47:39.287377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:47:39.294382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:47:39.298517 systemd-journald[1144]: Time spent on flushing to /var/log/journal/53af10c191a242ab9bd84c671fcd7ce0 is 57.731ms for 917 entries.
Jan 30 13:47:39.298517 systemd-journald[1144]: System Journal (/var/log/journal/53af10c191a242ab9bd84c671fcd7ce0) is 8.0M, max 584.8M, 576.8M free.
Jan 30 13:47:39.391527 systemd-journald[1144]: Received client request to flush runtime journal.
Jan 30 13:47:39.327372 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:47:39.347392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:47:39.367397 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:47:39.378554 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:47:39.390986 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:47:39.403308 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:47:39.414887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:47:39.432273 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 30 13:47:39.432307 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 30 13:47:39.438441 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:47:39.450077 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:47:39.462835 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:47:39.470460 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:47:39.542083 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:47:39.563522 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:47:39.590958 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jan 30 13:47:39.591525 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jan 30 13:47:39.601416 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:47:40.134485 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:47:40.156408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:47:40.186754 systemd-udevd[1208]: Using default interface naming scheme 'v255'.
Jan 30 13:47:40.234775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:47:40.259443 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:47:40.299382 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:47:40.432630 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:47:40.446323 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 30 13:47:40.471198 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 13:47:40.482454 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 30 13:47:40.504221 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 30 13:47:40.518204 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:47:40.531224 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 30 13:47:40.547194 kernel: ACPI: button: Sleep Button [SLPF]
Jan 30 13:47:40.569184 kernel: EDAC MC: Ver: 3.0.0
Jan 30 13:47:40.629667 systemd-networkd[1217]: lo: Link UP
Jan 30 13:47:40.629684 systemd-networkd[1217]: lo: Gained carrier
Jan 30 13:47:40.633732 systemd-networkd[1217]: Enumeration completed
Jan 30 13:47:40.633935 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:47:40.636481 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:40.636497 systemd-networkd[1217]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:47:40.637204 systemd-networkd[1217]: eth0: Link UP
Jan 30 13:47:40.637220 systemd-networkd[1217]: eth0: Gained carrier
Jan 30 13:47:40.637244 systemd-networkd[1217]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:40.655179 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1220)
Jan 30 13:47:40.655681 systemd-networkd[1217]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 30 13:47:40.661388 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:47:40.684399 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:47:40.757406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 30 13:47:40.774581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:40.792540 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:47:40.810483 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:47:40.831900 lvm[1251]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:47:40.861839 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:47:40.862457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:47:40.867407 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:47:40.880471 lvm[1255]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:47:40.913640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:40.927093 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:47:40.939246 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:47:40.950360 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:47:40.950408 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:47:40.960360 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:47:40.970733 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:47:40.994456 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:47:41.012408 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:47:41.022542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:47:41.029528 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:47:41.047145 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:47:41.071884 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:47:41.073811 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:47:41.094285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:47:41.111015 kernel: loop0: detected capacity change from 0 to 142488
Jan 30 13:47:41.123357 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:47:41.124933 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:47:41.174213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:47:41.203199 kernel: loop1: detected capacity change from 0 to 210664
Jan 30 13:47:41.264480 kernel: loop2: detected capacity change from 0 to 140768
Jan 30 13:47:41.338213 kernel: loop3: detected capacity change from 0 to 54824
Jan 30 13:47:41.411942 kernel: loop4: detected capacity change from 0 to 142488
Jan 30 13:47:41.455179 kernel: loop5: detected capacity change from 0 to 210664
Jan 30 13:47:41.489456 kernel: loop6: detected capacity change from 0 to 140768
Jan 30 13:47:41.530216 kernel: loop7: detected capacity change from 0 to 54824
Jan 30 13:47:41.552920 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jan 30 13:47:41.554005 (sd-merge)[1280]: Merged extensions into '/usr'.
Jan 30 13:47:41.565940 systemd[1]: Reloading requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:47:41.566128 systemd[1]: Reloading...
Jan 30 13:47:41.674061 zram_generator::config[1305]: No configuration found.
Jan 30 13:47:41.907566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:47:41.921603 ldconfig[1264]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:47:41.997572 systemd[1]: Reloading finished in 430 ms.
Jan 30 13:47:42.020268 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:47:42.030781 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:47:42.051485 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:47:42.066385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:47:42.081540 systemd[1]: Reloading requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:47:42.081731 systemd[1]: Reloading...
Jan 30 13:47:42.109516 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:47:42.110216 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:47:42.111991 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:47:42.112604 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 30 13:47:42.112728 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 30 13:47:42.119758 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:47:42.119777 systemd-tmpfiles[1357]: Skipping /boot
Jan 30 13:47:42.140530 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:47:42.143357 systemd-tmpfiles[1357]: Skipping /boot
Jan 30 13:47:42.203183 zram_generator::config[1386]: No configuration found.
Jan 30 13:47:42.263374 systemd-networkd[1217]: eth0: Gained IPv6LL
Jan 30 13:47:42.360240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:47:42.444270 systemd[1]: Reloading finished in 361 ms.
Jan 30 13:47:42.463433 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 13:47:42.480903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:47:42.511132 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:47:42.528714 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:47:42.548368 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:47:42.566677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:47:42.591413 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:47:42.613784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:42.615308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:47:42.626439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:47:42.628178 augenrules[1456]: No rules
Jan 30 13:47:42.644250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:47:42.670618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:47:42.680472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:47:42.680863 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:42.686951 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:47:42.700480 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:47:42.713357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:47:42.713646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:47:42.726372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:47:42.726654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:47:42.739953 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:47:42.740623 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:47:42.746914 systemd-resolved[1448]: Positive Trust Anchors:
Jan 30 13:47:42.746970 systemd-resolved[1448]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:47:42.747037 systemd-resolved[1448]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:47:42.756554 systemd-resolved[1448]: Defaulting to hostname 'linux'.
Jan 30 13:47:42.758567 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:47:42.772628 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:47:42.783106 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:47:42.800977 systemd[1]: Reached target network.target - Network.
Jan 30 13:47:42.809508 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:47:42.819513 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:47:42.830469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:42.830846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:47:42.836537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:47:42.863675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:47:42.883589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:47:42.893464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:47:42.902570 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:47:42.912385 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:47:42.912624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:42.919081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:47:42.919390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:47:42.931347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:47:42.931654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:47:42.944105 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:47:42.944414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:47:42.955248 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:47:42.973223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:42.973625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:47:42.978506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:47:42.997382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:47:43.015597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:47:43.035603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:47:43.053574 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 30 13:47:43.063539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:47:43.063946 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:47:43.073546 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:47:43.073757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:43.076896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:47:43.077276 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:47:43.090307 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:47:43.090630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:47:43.101048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:47:43.101354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:47:43.112989 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:47:43.113306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:47:43.134235 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:47:43.143494 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 30 13:47:43.164408 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Jan 30 13:47:43.175389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:47:43.175487 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:47:43.185523 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:47:43.197405 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:47:43.209579 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:47:43.219493 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:47:43.230379 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:47:43.241353 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:47:43.241488 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:47:43.250343 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:47:43.259963 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:47:43.272237 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:47:43.280623 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:47:43.281738 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Jan 30 13:47:43.293647 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:47:43.306605 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:47:43.316310 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:47:43.326318 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:47:43.335614 systemd[1]: System is tainted: cgroupsv1
Jan 30 13:47:43.335701 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:47:43.335738 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:47:43.341311 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:47:43.364406 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 13:47:43.381723 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:47:43.417425 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:47:43.439759 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:47:43.449309 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:47:43.453386 jq[1532]: false
Jan 30 13:47:43.456052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:47:43.469136 coreos-metadata[1529]: Jan 30 13:47:43.469 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Jan 30 13:47:43.473430 coreos-metadata[1529]: Jan 30 13:47:43.473 INFO Fetch successful
Jan 30 13:47:43.474716 coreos-metadata[1529]: Jan 30 13:47:43.474 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Jan 30 13:47:43.476465 coreos-metadata[1529]: Jan 30 13:47:43.476 INFO Fetch successful
Jan 30 13:47:43.476465 coreos-metadata[1529]: Jan 30 13:47:43.476 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Jan 30 13:47:43.478759 coreos-metadata[1529]: Jan 30 13:47:43.477 INFO Fetch successful
Jan 30 13:47:43.478759 coreos-metadata[1529]: Jan 30 13:47:43.477 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Jan 30 13:47:43.478277 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:47:43.481782 coreos-metadata[1529]: Jan 30 13:47:43.479 INFO Fetch successful
Jan 30 13:47:43.496214 extend-filesystems[1535]: Found loop4
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found loop5
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found loop6
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found loop7
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda1
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda2
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda3
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found usr
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda4
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda6
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda7
Jan 30 13:47:43.502355 extend-filesystems[1535]: Found sda9
Jan 30 13:47:43.502355 extend-filesystems[1535]: Checking size of /dev/sda9
Jan 30 13:47:43.657664 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Jan 30 13:47:43.657722 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Jan 30 13:47:43.501413 systemd[1]: Started ntpd.service - Network Time Service.
Jan 30 13:47:43.574643 dbus-daemon[1531]: [system] SELinux support is enabled
Jan 30 13:47:43.659776 extend-filesystems[1535]: Resized partition /dev/sda9
Jan 30 13:47:43.690393 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1574)
Jan 30 13:47:43.540423 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:47:43.588260 ntpd[1541]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting
Jan 30 13:47:43.700356 extend-filesystems[1554]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:47:43.700356 extend-filesystems[1554]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 30 13:47:43.700356 extend-filesystems[1554]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 30 13:47:43.700356 extend-filesystems[1554]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Jan 30 13:47:43.580379 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: ----------------------------------------------------
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: ntp-4 is maintained by Network Time Foundation,
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: corporation. Support and training for ntp-4 are
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: available at https://www.nwtime.org/support
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: ----------------------------------------------------
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: proto: precision = 0.086 usec (-23)
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: basedate set to 2025-01-17
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: gps base set to 2025-01-19 (week 2350)
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Listen and drop on 0 v6wildcard [::]:123
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Listen normally on 2 lo 127.0.0.1:123
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Listen normally on 3 eth0 10.128.0.26:123
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Listen normally on 4 lo [::1]:123
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:1a%2]:123
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: Listening on routing socket on fd #22 for interface updates
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:47:43.736643 ntpd[1541]: 30 Jan 13:47:43 ntpd[1541]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:47:43.588292 ntpd[1541]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 30 13:47:43.741409 extend-filesystems[1535]: Resized filesystem in /dev/sda9
Jan 30 13:47:43.601380 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:47:43.588309 ntpd[1541]: ----------------------------------------------------
Jan 30 13:47:43.639462 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:47:43.757235 init.sh[1557]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Jan 30 13:47:43.757235 init.sh[1557]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Jan 30 13:47:43.757235 init.sh[1557]: + /usr/bin/google_instance_setup
Jan 30 13:47:43.588323 ntpd[1541]: ntp-4 is maintained by Network Time Foundation,
Jan 30 13:47:43.694655 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:47:43.588338 ntpd[1541]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 30 13:47:43.712442 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:47:43.588352 ntpd[1541]: corporation. Support and training for ntp-4 are
Jan 30 13:47:43.733925 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Jan 30 13:47:43.588367 ntpd[1541]: available at https://www.nwtime.org/support
Jan 30 13:47:43.745457 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:47:43.588380 ntpd[1541]: ----------------------------------------------------
Jan 30 13:47:43.590828 ntpd[1541]: proto: precision = 0.086 usec (-23)
Jan 30 13:47:43.591267 ntpd[1541]: basedate set to 2025-01-17
Jan 30 13:47:43.591290 ntpd[1541]: gps base set to 2025-01-19 (week 2350)
Jan 30 13:47:43.593450 dbus-daemon[1531]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1217 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 30 13:47:43.595096 ntpd[1541]: Listen and drop on 0 v6wildcard [::]:123
Jan 30 13:47:43.595180 ntpd[1541]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 30 13:47:43.595415 ntpd[1541]: Listen normally on 2 lo 127.0.0.1:123
Jan 30 13:47:43.595472 ntpd[1541]: Listen normally on 3 eth0 10.128.0.26:123
Jan 30 13:47:43.595549 ntpd[1541]: Listen normally on 4 lo [::1]:123
Jan 30 13:47:43.595619 ntpd[1541]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:1a%2]:123
Jan 30 13:47:43.595674 ntpd[1541]: Listening on routing socket on fd #22 for interface updates
Jan 30 13:47:43.598560 ntpd[1541]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:47:43.598595 ntpd[1541]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:47:43.797337 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:47:43.810138 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:47:43.839184 jq[1588]: true
Jan 30 13:47:43.843256 update_engine[1584]: I20250130 13:47:43.840611 1584 main.cc:92] Flatcar Update Engine starting
Jan 30 13:47:43.847786 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:47:43.851571 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:47:43.854956 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:47:43.856886 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:47:43.861084 update_engine[1584]: I20250130 13:47:43.859663 1584 update_check_scheduler.cc:74] Next update check in 6m18s
Jan 30 13:47:43.883773 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:47:43.884616 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:47:43.895499 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 13:47:43.914746 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:47:43.915125 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:47:43.957195 jq[1596]: true
Jan 30 13:47:43.957919 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:47:43.999084 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 30 13:47:43.999141 systemd-logind[1582]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 30 13:47:43.999231 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 13:47:44.000565 systemd-logind[1582]: New seat seat0.
Jan 30 13:47:44.002438 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:47:44.017306 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 13:47:44.063027 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 30 13:47:44.100543 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:47:44.123006 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 13:47:44.124190 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:47:44.124471 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:47:44.126189 tar[1595]: linux-amd64/helm
Jan 30 13:47:44.145560 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 30 13:47:44.156417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:47:44.156688 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:47:44.170358 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:47:44.183252 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:47:44.261605 bash[1635]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:47:44.259389 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:47:44.284535 systemd[1]: Starting sshkeys.service...
Jan 30 13:47:44.348863 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 13:47:44.371322 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 13:47:44.647164 coreos-metadata[1639]: Jan 30 13:47:44.644 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Jan 30 13:47:44.669187 coreos-metadata[1639]: Jan 30 13:47:44.667 INFO Fetch failed with 404: resource not found
Jan 30 13:47:44.669187 coreos-metadata[1639]: Jan 30 13:47:44.667 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Jan 30 13:47:44.674292 coreos-metadata[1639]: Jan 30 13:47:44.671 INFO Fetch successful
Jan 30 13:47:44.674292 coreos-metadata[1639]: Jan 30 13:47:44.671 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Jan 30 13:47:44.674292 coreos-metadata[1639]: Jan 30 13:47:44.673 INFO Fetch failed with 404: resource not found
Jan 30 13:47:44.674292 coreos-metadata[1639]: Jan 30 13:47:44.674 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Jan 30 13:47:44.676808 coreos-metadata[1639]: Jan 30 13:47:44.676 INFO Fetch failed with 404: resource not found
Jan 30 13:47:44.676808 coreos-metadata[1639]: Jan 30 13:47:44.676 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Jan 30 13:47:44.681013 coreos-metadata[1639]: Jan 30 13:47:44.680 INFO Fetch successful
Jan 30 13:47:44.687292 unknown[1639]: wrote ssh authorized keys file for user: core
Jan 30 13:47:44.785604 update-ssh-keys[1651]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:47:44.785470 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 13:47:44.801724 systemd[1]: Finished sshkeys.service.
Jan 30 13:47:44.934423 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 30 13:47:44.936309 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 30 13:47:44.939081 dbus-daemon[1531]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1627 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 30 13:47:44.955524 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 30 13:47:44.962130 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:47:45.066380 polkitd[1658]: Started polkitd version 121
Jan 30 13:47:45.099513 polkitd[1658]: Loading rules from directory /etc/polkit-1/rules.d
Jan 30 13:47:45.102491 polkitd[1658]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 30 13:47:45.109527 polkitd[1658]: Finished loading, compiling and executing 2 rules
Jan 30 13:47:45.114358 dbus-daemon[1531]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 30 13:47:45.114642 systemd[1]: Started polkit.service - Authorization Manager.
Jan 30 13:47:45.115214 polkitd[1658]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 30 13:47:45.163992 containerd[1597]: time="2025-01-30T13:47:45.163881964Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 13:47:45.177734 systemd-hostnamed[1627]: Hostname set to (transient)
Jan 30 13:47:45.178543 systemd-resolved[1448]: System hostname changed to 'ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal'.
Jan 30 13:47:45.344701 containerd[1597]: time="2025-01-30T13:47:45.340374062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:47:45.347252 containerd[1597]: time="2025-01-30T13:47:45.347172083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:47:45.347252 containerd[1597]: time="2025-01-30T13:47:45.347249784Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:47:45.347426 containerd[1597]: time="2025-01-30T13:47:45.347278293Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:47:45.347522 containerd[1597]: time="2025-01-30T13:47:45.347491729Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:47:45.347577 containerd[1597]: time="2025-01-30T13:47:45.347535391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:47:45.347659 containerd[1597]: time="2025-01-30T13:47:45.347630852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:47:45.347709 containerd[1597]: time="2025-01-30T13:47:45.347662753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348059 containerd[1597]: time="2025-01-30T13:47:45.348016209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348175 containerd[1597]: time="2025-01-30T13:47:45.348065395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348175 containerd[1597]: time="2025-01-30T13:47:45.348090800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348175 containerd[1597]: time="2025-01-30T13:47:45.348107973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348300 containerd[1597]: time="2025-01-30T13:47:45.348279879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348912 containerd[1597]: time="2025-01-30T13:47:45.348607271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348992 containerd[1597]: time="2025-01-30T13:47:45.348915717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:47:45.348992 containerd[1597]: time="2025-01-30T13:47:45.348941296Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:47:45.349082 containerd[1597]: time="2025-01-30T13:47:45.349063197Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:47:45.350195 containerd[1597]: time="2025-01-30T13:47:45.349133759Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:47:45.361572 containerd[1597]: time="2025-01-30T13:47:45.361477298Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.361774810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.361808498Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.361835781Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.361876935Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362078499Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362639246Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362803604Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362827436Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362848602Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362870623Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362893903Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362914749Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.362966751Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363176 containerd[1597]: time="2025-01-30T13:47:45.363019695Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363828 containerd[1597]: time="2025-01-30T13:47:45.363041199Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363828 containerd[1597]: time="2025-01-30T13:47:45.363062211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363828 containerd[1597]: time="2025-01-30T13:47:45.363083912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:47:45.363828 containerd[1597]: time="2025-01-30T13:47:45.363114714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.363828 containerd[1597]: time="2025-01-30T13:47:45.363137428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.366940848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.366990158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367025792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367051655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367074877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367105916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367135189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367181075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367204450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.367223957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.368041066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.368089379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:47:45.368383 containerd[1597]: time="2025-01-30T13:47:45.368134348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369764974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369805153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369882146Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369911979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369932761Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369953284Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369970046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.369990816Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.370008098Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:47:45.373295 containerd[1597]: time="2025-01-30T13:47:45.370024992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:47:45.373845 containerd[1597]: time="2025-01-30T13:47:45.370484071Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:47:45.373845 containerd[1597]: time="2025-01-30T13:47:45.370591643Z" level=info msg="Connect containerd service"
Jan 30 13:47:45.373845 containerd[1597]: time="2025-01-30T13:47:45.370660296Z" level=info msg="using legacy CRI server"
Jan 30 13:47:45.373845 containerd[1597]: time="2025-01-30T13:47:45.370671905Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:47:45.373845 containerd[1597]: time="2025-01-30T13:47:45.370830518Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:47:45.388588 containerd[1597]: time="2025-01-30T13:47:45.387337230Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:47:45.389685 containerd[1597]: time="2025-01-30T13:47:45.389062114Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:47:45.392632 containerd[1597]: time="2025-01-30T13:47:45.391204658Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:47:45.392632 containerd[1597]: time="2025-01-30T13:47:45.391304637Z" level=info msg="Start subscribing containerd event"
Jan 30 13:47:45.392632 containerd[1597]: time="2025-01-30T13:47:45.391363192Z" level=info msg="Start recovering state"
Jan 30 13:47:45.392632 containerd[1597]: time="2025-01-30T13:47:45.391488148Z" level=info msg="Start event monitor"
Jan 30 13:47:45.392632 containerd[1597]: time="2025-01-30T13:47:45.391506314Z" level=info msg="Start snapshots syncer"
Jan 30 13:47:45.392632 containerd[1597]: time="2025-01-30T13:47:45.391529758Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:47:45.392632 containerd[1597]: time="2025-01-30T13:47:45.391549882Z" level=info msg="Start streaming server"
Jan 30 13:47:45.391841 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:47:45.401186 containerd[1597]: time="2025-01-30T13:47:45.398190734Z" level=info msg="containerd successfully booted in 0.241218s"
Jan 30 13:47:45.684966 instance-setup[1570]: INFO Running google_set_multiqueue.
Jan 30 13:47:45.746669 instance-setup[1570]: INFO Set channels for eth0 to 2.
Jan 30 13:47:45.761393 instance-setup[1570]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1.
Jan 30 13:47:45.767388 instance-setup[1570]: INFO /proc/irq/27/smp_affinity_list: real affinity 0
Jan 30 13:47:45.767453 instance-setup[1570]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1.
Jan 30 13:47:45.775407 instance-setup[1570]: INFO /proc/irq/28/smp_affinity_list: real affinity 0
Jan 30 13:47:45.775494 instance-setup[1570]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1.
Jan 30 13:47:45.779692 instance-setup[1570]: INFO /proc/irq/29/smp_affinity_list: real affinity 1
Jan 30 13:47:45.782039 instance-setup[1570]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1.
Jan 30 13:47:45.790782 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:47:45.791361 instance-setup[1570]: INFO /proc/irq/30/smp_affinity_list: real affinity 1
Jan 30 13:47:45.821661 instance-setup[1570]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jan 30 13:47:45.837895 instance-setup[1570]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jan 30 13:47:45.843785 instance-setup[1570]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Jan 30 13:47:45.844049 instance-setup[1570]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Jan 30 13:47:45.864814 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:47:45.883556 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:47:45.910280 init.sh[1557]: + /usr/bin/google_metadata_script_runner --script-type startup
Jan 30 13:47:45.921838 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:47:45.922288 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:47:45.939524 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:47:45.986348 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:47:46.005762 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:47:46.026016 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:47:46.036640 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:47:46.085875 tar[1595]: linux-amd64/LICENSE
Jan 30 13:47:46.085875 tar[1595]: linux-amd64/README.md
Jan 30 13:47:46.121717 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 13:47:46.174030 startup-script[1712]: INFO Starting startup scripts.
Jan 30 13:47:46.181008 startup-script[1712]: INFO No startup scripts found in metadata.
Jan 30 13:47:46.181092 startup-script[1712]: INFO Finished running startup scripts.
Jan 30 13:47:46.205324 init.sh[1557]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Jan 30 13:47:46.205324 init.sh[1557]: + daemon_pids=()
Jan 30 13:47:46.205324 init.sh[1557]: + for d in accounts clock_skew network
Jan 30 13:47:46.205324 init.sh[1557]: + daemon_pids+=($!)
Jan 30 13:47:46.205324 init.sh[1557]: + for d in accounts clock_skew network
Jan 30 13:47:46.205324 init.sh[1557]: + daemon_pids+=($!)
Jan 30 13:47:46.205324 init.sh[1557]: + for d in accounts clock_skew network
Jan 30 13:47:46.205324 init.sh[1557]: + daemon_pids+=($!)
Jan 30 13:47:46.205324 init.sh[1557]: + NOTIFY_SOCKET=/run/systemd/notify
Jan 30 13:47:46.205324 init.sh[1557]: + /usr/bin/systemd-notify --ready
Jan 30 13:47:46.205824 init.sh[1730]: + /usr/bin/google_clock_skew_daemon
Jan 30 13:47:46.206252 init.sh[1731]: + /usr/bin/google_network_daemon
Jan 30 13:47:46.207894 init.sh[1729]: + /usr/bin/google_accounts_daemon
Jan 30 13:47:46.228343 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Jan 30 13:47:46.244175 init.sh[1557]: + wait -n 1729 1730 1731
Jan 30 13:47:46.467953 google-clock-skew[1730]: INFO Starting Google Clock Skew daemon.
Jan 30 13:47:46.482921 google-clock-skew[1730]: INFO Clock drift token has changed: 0.
Jan 30 13:47:47.000079 systemd-resolved[1448]: Clock change detected. Flushing caches.
Jan 30 13:47:47.001653 google-clock-skew[1730]: INFO Synced system time with hardware clock.
Jan 30 13:47:47.016693 google-networking[1731]: INFO Starting Google Networking daemon.
Jan 30 13:47:47.092820 groupadd[1741]: group added to /etc/group: name=google-sudoers, GID=1000
Jan 30 13:47:47.097829 groupadd[1741]: group added to /etc/gshadow: name=google-sudoers
Jan 30 13:47:47.147271 groupadd[1741]: new group: name=google-sudoers, GID=1000
Jan 30 13:47:47.178443 google-accounts[1729]: INFO Starting Google Accounts daemon.
Jan 30 13:47:47.191106 google-accounts[1729]: WARNING OS Login not installed.
Jan 30 13:47:47.193182 google-accounts[1729]: INFO Creating a new user account for 0.
Jan 30 13:47:47.198730 init.sh[1749]: useradd: invalid user name '0': use --badname to ignore
Jan 30 13:47:47.199293 google-accounts[1729]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Jan 30 13:47:47.229663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:47:47.242615 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 13:47:47.249157 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:47:47.253884 systemd[1]: Startup finished in 13.122s (kernel) + 10.026s (userspace) = 23.149s.
Jan 30 13:47:48.265945 kubelet[1759]: E0130 13:47:48.265865 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:47:48.268914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:47:48.269343 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:47:52.973576 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 13:47:52.978814 systemd[1]: Started sshd@0-10.128.0.26:22-139.178.68.195:42696.service - OpenSSH per-connection server daemon (139.178.68.195:42696).
Jan 30 13:47:53.339387 sshd[1772]: Accepted publickey for core from 139.178.68.195 port 42696 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:47:53.343059 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:53.359589 systemd-logind[1582]: New session 1 of user core.
Jan 30 13:47:53.361157 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 13:47:53.366812 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 13:47:53.396356 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 13:47:53.409874 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:47:53.434453 (systemd)[1778]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:47:53.584858 systemd[1778]: Queued start job for default target default.target.
Jan 30 13:47:53.585523 systemd[1778]: Created slice app.slice - User Application Slice.
Jan 30 13:47:53.585564 systemd[1778]: Reached target paths.target - Paths.
Jan 30 13:47:53.585586 systemd[1778]: Reached target timers.target - Timers.
Jan 30 13:47:53.590716 systemd[1778]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:47:53.604141 systemd[1778]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:47:53.604245 systemd[1778]: Reached target sockets.target - Sockets.
Jan 30 13:47:53.604270 systemd[1778]: Reached target basic.target - Basic System.
Jan 30 13:47:53.604348 systemd[1778]: Reached target default.target - Main User Target.
Jan 30 13:47:53.604973 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:47:53.607450 systemd[1778]: Startup finished in 164ms.
Jan 30 13:47:53.617662 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:47:53.881255 systemd[1]: Started sshd@1-10.128.0.26:22-139.178.68.195:42698.service - OpenSSH per-connection server daemon (139.178.68.195:42698).
Jan 30 13:47:54.222434 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 42698 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:47:54.224238 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:54.230885 systemd-logind[1582]: New session 2 of user core.
Jan 30 13:47:54.238850 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:47:54.475207 sshd[1790]: pam_unix(sshd:session): session closed for user core
Jan 30 13:47:54.481752 systemd[1]: sshd@1-10.128.0.26:22-139.178.68.195:42698.service: Deactivated successfully.
Jan 30 13:47:54.487010 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 13:47:54.488019 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit.
Jan 30 13:47:54.489585 systemd-logind[1582]: Removed session 2.
Jan 30 13:47:54.538877 systemd[1]: Started sshd@2-10.128.0.26:22-139.178.68.195:42706.service - OpenSSH per-connection server daemon (139.178.68.195:42706).
Jan 30 13:47:54.877890 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 42706 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:47:54.879798 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:54.886138 systemd-logind[1582]: New session 3 of user core.
Jan 30 13:47:54.896871 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 13:47:55.124773 sshd[1798]: pam_unix(sshd:session): session closed for user core
Jan 30 13:47:55.129216 systemd[1]: sshd@2-10.128.0.26:22-139.178.68.195:42706.service: Deactivated successfully.
Jan 30 13:47:55.135214 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 13:47:55.135928 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit.
Jan 30 13:47:55.137865 systemd-logind[1582]: Removed session 3.
Jan 30 13:47:55.181796 systemd[1]: Started sshd@3-10.128.0.26:22-139.178.68.195:55944.service - OpenSSH per-connection server daemon (139.178.68.195:55944).
Jan 30 13:47:55.522197 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 55944 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:47:55.524267 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:55.530810 systemd-logind[1582]: New session 4 of user core.
Jan 30 13:47:55.537871 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 13:47:55.773069 sshd[1806]: pam_unix(sshd:session): session closed for user core
Jan 30 13:47:55.777870 systemd[1]: sshd@3-10.128.0.26:22-139.178.68.195:55944.service: Deactivated successfully.
Jan 30 13:47:55.784137 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 13:47:55.784272 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit.
Jan 30 13:47:55.786006 systemd-logind[1582]: Removed session 4.
Jan 30 13:47:55.836885 systemd[1]: Started sshd@4-10.128.0.26:22-139.178.68.195:55950.service - OpenSSH per-connection server daemon (139.178.68.195:55950).
Jan 30 13:47:56.175473 sshd[1814]: Accepted publickey for core from 139.178.68.195 port 55950 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:47:56.177269 sshd[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:47:56.183484 systemd-logind[1582]: New session 5 of user core.
Jan 30 13:47:56.192970 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:47:56.398144 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:47:56.398691 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:47:56.414277 sudo[1818]: pam_unix(sudo:session): session closed for user root Jan 30 13:47:56.466783 sshd[1814]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:56.472934 systemd[1]: sshd@4-10.128.0.26:22-139.178.68.195:55950.service: Deactivated successfully. Jan 30 13:47:56.477465 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:47:56.478237 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:47:56.480523 systemd-logind[1582]: Removed session 5. Jan 30 13:47:56.532835 systemd[1]: Started sshd@5-10.128.0.26:22-139.178.68.195:55966.service - OpenSSH per-connection server daemon (139.178.68.195:55966). Jan 30 13:47:56.881308 sshd[1823]: Accepted publickey for core from 139.178.68.195 port 55966 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:47:56.883332 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:56.890145 systemd-logind[1582]: New session 6 of user core. Jan 30 13:47:56.899813 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 13:47:57.093250 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:47:57.093776 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:47:57.098712 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 30 13:47:57.112249 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:47:57.112767 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:47:57.135887 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:47:57.138487 auditctl[1831]: No rules Jan 30 13:47:57.140043 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:47:57.141648 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:47:57.151860 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:47:57.182774 augenrules[1850]: No rules Jan 30 13:47:57.184791 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:47:57.189207 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 30 13:47:57.243507 sshd[1823]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:57.249074 systemd[1]: sshd@5-10.128.0.26:22-139.178.68.195:55966.service: Deactivated successfully. Jan 30 13:47:57.253961 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:47:57.254955 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:47:57.256318 systemd-logind[1582]: Removed session 6. Jan 30 13:47:57.308241 systemd[1]: Started sshd@6-10.128.0.26:22-139.178.68.195:55970.service - OpenSSH per-connection server daemon (139.178.68.195:55970). 
Jan 30 13:47:57.651965 sshd[1859]: Accepted publickey for core from 139.178.68.195 port 55970 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:47:57.653704 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:57.659111 systemd-logind[1582]: New session 7 of user core. Jan 30 13:47:57.669823 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:47:57.860874 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:47:57.861385 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:47:58.303551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:47:58.308767 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:47:58.312669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:58.314903 (dockerd)[1879]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:47:58.669846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:58.683047 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:47:58.770146 kubelet[1896]: E0130 13:47:58.769955 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:47:58.775449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:47:58.775743 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:47:58.867133 dockerd[1879]: time="2025-01-30T13:47:58.867050890Z" level=info msg="Starting up" Jan 30 13:47:59.142611 dockerd[1879]: time="2025-01-30T13:47:59.142535415Z" level=info msg="Loading containers: start." Jan 30 13:47:59.295532 kernel: Initializing XFRM netlink socket Jan 30 13:47:59.399838 systemd-networkd[1217]: docker0: Link UP Jan 30 13:47:59.423384 dockerd[1879]: time="2025-01-30T13:47:59.423320242Z" level=info msg="Loading containers: done." Jan 30 13:47:59.446875 dockerd[1879]: time="2025-01-30T13:47:59.446802440Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:47:59.447110 dockerd[1879]: time="2025-01-30T13:47:59.446953714Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:47:59.447174 dockerd[1879]: time="2025-01-30T13:47:59.447134616Z" level=info msg="Daemon has completed initialization" Jan 30 13:47:59.487098 dockerd[1879]: time="2025-01-30T13:47:59.486637441Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:47:59.486971 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:48:00.538452 containerd[1597]: time="2025-01-30T13:48:00.538382145Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:48:01.084667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842194631.mount: Deactivated successfully. 
Jan 30 13:48:02.817215 containerd[1597]: time="2025-01-30T13:48:02.817123322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:02.819143 containerd[1597]: time="2025-01-30T13:48:02.819010172Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32683640" Jan 30 13:48:02.820689 containerd[1597]: time="2025-01-30T13:48:02.820607624Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:02.824895 containerd[1597]: time="2025-01-30T13:48:02.824817206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:02.826668 containerd[1597]: time="2025-01-30T13:48:02.826367661Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.287904239s" Jan 30 13:48:02.826668 containerd[1597]: time="2025-01-30T13:48:02.826447815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:48:02.859952 containerd[1597]: time="2025-01-30T13:48:02.859901870Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:48:04.482133 containerd[1597]: time="2025-01-30T13:48:04.482062859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:04.483712 containerd[1597]: time="2025-01-30T13:48:04.483635935Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29607679" Jan 30 13:48:04.484914 containerd[1597]: time="2025-01-30T13:48:04.484837886Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:04.489537 containerd[1597]: time="2025-01-30T13:48:04.489472109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:04.492581 containerd[1597]: time="2025-01-30T13:48:04.491483004Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.631520585s" Jan 30 13:48:04.492581 containerd[1597]: time="2025-01-30T13:48:04.491533318Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:48:04.521759 containerd[1597]: time="2025-01-30T13:48:04.521698588Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:48:05.661818 containerd[1597]: time="2025-01-30T13:48:05.661742044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:05.663437 containerd[1597]: time="2025-01-30T13:48:05.663342598Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17784980" Jan 30 13:48:05.665225 containerd[1597]: time="2025-01-30T13:48:05.665185370Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:05.669860 containerd[1597]: time="2025-01-30T13:48:05.669767913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:05.675888 containerd[1597]: time="2025-01-30T13:48:05.675623957Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.15385965s" Jan 30 13:48:05.675888 containerd[1597]: time="2025-01-30T13:48:05.675711412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:48:05.708195 containerd[1597]: time="2025-01-30T13:48:05.708133608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:48:06.891258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592278381.mount: Deactivated successfully. 
Jan 30 13:48:07.451869 containerd[1597]: time="2025-01-30T13:48:07.451795655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:07.453486 containerd[1597]: time="2025-01-30T13:48:07.453414547Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29060232" Jan 30 13:48:07.455178 containerd[1597]: time="2025-01-30T13:48:07.455111918Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:07.460421 containerd[1597]: time="2025-01-30T13:48:07.458177325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:07.461246 containerd[1597]: time="2025-01-30T13:48:07.461190314Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.75299798s" Jan 30 13:48:07.461371 containerd[1597]: time="2025-01-30T13:48:07.461259996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:48:07.496717 containerd[1597]: time="2025-01-30T13:48:07.496657621Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:48:07.976298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826610908.mount: Deactivated successfully. 
Jan 30 13:48:08.820561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:48:08.831722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:09.125656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:09.139134 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:09.224582 kubelet[2188]: E0130 13:48:09.224421 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:09.228156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:09.230657 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:48:09.307884 containerd[1597]: time="2025-01-30T13:48:09.307811533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:09.309518 containerd[1597]: time="2025-01-30T13:48:09.309434235Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 30 13:48:09.310948 containerd[1597]: time="2025-01-30T13:48:09.310857854Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:09.316258 containerd[1597]: time="2025-01-30T13:48:09.316158521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:09.318304 containerd[1597]: time="2025-01-30T13:48:09.318121719Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.821408804s" Jan 30 13:48:09.318304 containerd[1597]: time="2025-01-30T13:48:09.318174467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:48:09.352046 containerd[1597]: time="2025-01-30T13:48:09.351984079Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:48:09.745894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152482653.mount: Deactivated successfully. 
Jan 30 13:48:09.753492 containerd[1597]: time="2025-01-30T13:48:09.753434341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:09.755353 containerd[1597]: time="2025-01-30T13:48:09.755276307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jan 30 13:48:09.757188 containerd[1597]: time="2025-01-30T13:48:09.757117383Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:09.761871 containerd[1597]: time="2025-01-30T13:48:09.761794184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:09.763332 containerd[1597]: time="2025-01-30T13:48:09.763133158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 411.09468ms" Jan 30 13:48:09.763332 containerd[1597]: time="2025-01-30T13:48:09.763189525Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:48:09.791802 containerd[1597]: time="2025-01-30T13:48:09.791760858Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:48:10.201047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507157888.mount: Deactivated successfully. 
Jan 30 13:48:12.372276 containerd[1597]: time="2025-01-30T13:48:12.372203190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:12.374053 containerd[1597]: time="2025-01-30T13:48:12.373982879Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Jan 30 13:48:12.375562 containerd[1597]: time="2025-01-30T13:48:12.375495422Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:12.379633 containerd[1597]: time="2025-01-30T13:48:12.379567580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:12.381528 containerd[1597]: time="2025-01-30T13:48:12.381040485Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.589227674s" Jan 30 13:48:12.381528 containerd[1597]: time="2025-01-30T13:48:12.381090992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:48:15.673831 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 13:48:16.085877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:16.098855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:16.133557 systemd[1]: Reloading requested from client PID 2325 ('systemctl') (unit session-7.scope)... 
Jan 30 13:48:16.133580 systemd[1]: Reloading... Jan 30 13:48:16.271433 zram_generator::config[2361]: No configuration found. Jan 30 13:48:16.462967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:48:16.557873 systemd[1]: Reloading finished in 423 ms. Jan 30 13:48:16.617606 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:48:16.617768 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:48:16.618287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:16.624099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:16.934649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:16.947075 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:48:17.001267 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:17.001854 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:48:17.001854 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:48:17.001854 kubelet[2428]: I0130 13:48:17.001380 2428 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:48:17.385005 kubelet[2428]: I0130 13:48:17.384951 2428 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:48:17.385005 kubelet[2428]: I0130 13:48:17.384983 2428 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:48:17.385323 kubelet[2428]: I0130 13:48:17.385282 2428 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:48:17.418587 kubelet[2428]: I0130 13:48:17.417574 2428 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:48:17.419432 kubelet[2428]: E0130 13:48:17.419377 2428 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.441671 kubelet[2428]: I0130 13:48:17.441625 2428 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:48:17.445953 kubelet[2428]: I0130 13:48:17.445869 2428 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:48:17.446228 kubelet[2428]: I0130 13:48:17.445938 2428 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:48:17.446456 kubelet[2428]: I0130 13:48:17.446235 2428 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 30 13:48:17.446456 kubelet[2428]: I0130 13:48:17.446255 2428 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:48:17.446562 kubelet[2428]: I0130 13:48:17.446479 2428 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:17.448010 kubelet[2428]: I0130 13:48:17.447969 2428 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:48:17.448010 kubelet[2428]: I0130 13:48:17.448001 2428 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:48:17.448358 kubelet[2428]: I0130 13:48:17.448037 2428 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:48:17.448358 kubelet[2428]: I0130 13:48:17.448060 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:48:17.455045 kubelet[2428]: W0130 13:48:17.454827 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.455045 kubelet[2428]: E0130 13:48:17.454908 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.457000 kubelet[2428]: W0130 13:48:17.456933 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.457000 kubelet[2428]: E0130 13:48:17.456984 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.457180 kubelet[2428]: I0130 13:48:17.457106 2428 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:48:17.459301 kubelet[2428]: I0130 13:48:17.459253 2428 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:48:17.459419 kubelet[2428]: W0130 13:48:17.459350 2428 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:48:17.460359 kubelet[2428]: I0130 13:48:17.460172 2428 server.go:1264] "Started kubelet" Jan 30 13:48:17.468303 kubelet[2428]: I0130 13:48:17.466734 2428 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:48:17.468560 kubelet[2428]: I0130 13:48:17.468538 2428 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:48:17.470242 kubelet[2428]: I0130 13:48:17.469838 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:48:17.470242 kubelet[2428]: I0130 13:48:17.470215 2428 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:48:17.471824 kubelet[2428]: E0130 13:48:17.470450 2428 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal.181f7c864dbf3ca0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,UID:ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,},FirstTimestamp:2025-01-30 13:48:17.460141216 +0000 UTC m=+0.507533542,LastTimestamp:2025-01-30 13:48:17.460141216 +0000 UTC m=+0.507533542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,}" Jan 30 13:48:17.472327 kubelet[2428]: I0130 13:48:17.472301 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:48:17.478756 kubelet[2428]: E0130 13:48:17.478310 2428 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" not found" Jan 30 13:48:17.478756 kubelet[2428]: I0130 13:48:17.478369 2428 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:48:17.478756 kubelet[2428]: I0130 13:48:17.478511 2428 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:48:17.478756 kubelet[2428]: I0130 13:48:17.478606 2428 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:48:17.479149 kubelet[2428]: W0130 13:48:17.479092 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.479235 kubelet[2428]: E0130 13:48:17.479167 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.480119 kubelet[2428]: E0130 13:48:17.480060 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="200ms" Jan 30 13:48:17.481024 kubelet[2428]: E0130 13:48:17.480858 2428 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:48:17.482837 kubelet[2428]: I0130 13:48:17.482809 2428 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:48:17.482837 kubelet[2428]: I0130 13:48:17.482837 2428 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:48:17.482984 kubelet[2428]: I0130 13:48:17.482913 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:48:17.521483 kubelet[2428]: I0130 13:48:17.521157 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:48:17.529148 kubelet[2428]: I0130 13:48:17.529090 2428 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:48:17.529148 kubelet[2428]: I0130 13:48:17.529133 2428 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:48:17.529148 kubelet[2428]: I0130 13:48:17.529157 2428 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:48:17.529439 kubelet[2428]: E0130 13:48:17.529219 2428 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:48:17.532252 kubelet[2428]: W0130 13:48:17.531980 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.532252 kubelet[2428]: E0130 13:48:17.532065 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:17.539849 kubelet[2428]: I0130 13:48:17.539823 2428 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:48:17.540118 kubelet[2428]: I0130 13:48:17.540036 2428 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:48:17.540118 kubelet[2428]: I0130 13:48:17.540082 2428 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:17.542710 kubelet[2428]: I0130 13:48:17.542681 2428 policy_none.go:49] "None policy: Start" Jan 30 13:48:17.543565 kubelet[2428]: I0130 13:48:17.543539 2428 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:48:17.543671 kubelet[2428]: I0130 13:48:17.543586 2428 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:48:17.550987 kubelet[2428]: I0130 13:48:17.550948 2428 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:48:17.551407 kubelet[2428]: I0130 13:48:17.551249 2428 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:48:17.551511 kubelet[2428]: I0130 13:48:17.551447 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:48:17.555630 kubelet[2428]: E0130 13:48:17.555594 2428 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" not found" Jan 30 13:48:17.586490 kubelet[2428]: I0130 13:48:17.586434 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.586932 kubelet[2428]: E0130 13:48:17.586884 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.630337 kubelet[2428]: I0130 13:48:17.630254 2428 topology_manager.go:215] "Topology Admit Handler" podUID="04ca85077877b24892268bc37d855acc" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.639503 kubelet[2428]: I0130 13:48:17.639341 2428 topology_manager.go:215] "Topology Admit Handler" podUID="df7bc38b930d3acb165abdc261f36181" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.647341 kubelet[2428]: I0130 13:48:17.647001 2428 topology_manager.go:215] "Topology Admit Handler" podUID="22e3123772296cbdd9806b102070f17e" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.681634 kubelet[2428]: E0130 13:48:17.681558 2428 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="400ms" Jan 30 13:48:17.779992 kubelet[2428]: I0130 13:48:17.779916 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04ca85077877b24892268bc37d855acc-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"04ca85077877b24892268bc37d855acc\") " pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.779992 kubelet[2428]: I0130 13:48:17.779988 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.780239 kubelet[2428]: I0130 13:48:17.780021 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.780239 kubelet[2428]: I0130 13:48:17.780047 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/22e3123772296cbdd9806b102070f17e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"22e3123772296cbdd9806b102070f17e\") " pod="kube-system/kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.780239 kubelet[2428]: I0130 13:48:17.780073 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04ca85077877b24892268bc37d855acc-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"04ca85077877b24892268bc37d855acc\") " pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.780239 kubelet[2428]: I0130 13:48:17.780099 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.780494 kubelet[2428]: I0130 13:48:17.780125 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.780494 kubelet[2428]: I0130 13:48:17.780162 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.780494 kubelet[2428]: I0130 13:48:17.780205 2428 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04ca85077877b24892268bc37d855acc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"04ca85077877b24892268bc37d855acc\") " pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.794598 kubelet[2428]: I0130 13:48:17.794513 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.794997 kubelet[2428]: E0130 13:48:17.794946 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:17.962133 containerd[1597]: time="2025-01-30T13:48:17.961979926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,Uid:04ca85077877b24892268bc37d855acc,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:17.969507 containerd[1597]: time="2025-01-30T13:48:17.969094260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,Uid:df7bc38b930d3acb165abdc261f36181,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:17.972525 containerd[1597]: time="2025-01-30T13:48:17.972482990Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,Uid:22e3123772296cbdd9806b102070f17e,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:18.082945 kubelet[2428]: E0130 13:48:18.082866 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="800ms" Jan 30 13:48:18.200934 kubelet[2428]: I0130 13:48:18.200885 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:18.201365 kubelet[2428]: E0130 13:48:18.201314 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:18.259236 kubelet[2428]: W0130 13:48:18.259037 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:18.259236 kubelet[2428]: E0130 13:48:18.259128 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:18.375759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563319881.mount: Deactivated successfully. 
Jan 30 13:48:18.386391 containerd[1597]: time="2025-01-30T13:48:18.386331251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:18.387797 containerd[1597]: time="2025-01-30T13:48:18.387744597Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:18.389109 containerd[1597]: time="2025-01-30T13:48:18.389045300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 30 13:48:18.390091 containerd[1597]: time="2025-01-30T13:48:18.390027994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:48:18.391908 containerd[1597]: time="2025-01-30T13:48:18.391850059Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:18.393293 containerd[1597]: time="2025-01-30T13:48:18.393253251Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:18.394251 containerd[1597]: time="2025-01-30T13:48:18.394164846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:48:18.396638 containerd[1597]: time="2025-01-30T13:48:18.396530951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:18.399436 
containerd[1597]: time="2025-01-30T13:48:18.399079094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 426.499966ms" Jan 30 13:48:18.400915 containerd[1597]: time="2025-01-30T13:48:18.400611523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 438.528456ms" Jan 30 13:48:18.406186 containerd[1597]: time="2025-01-30T13:48:18.406125657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 436.913637ms" Jan 30 13:48:18.530849 kubelet[2428]: W0130 13:48:18.530653 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:18.530849 kubelet[2428]: E0130 13:48:18.530715 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.619858889Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.620168253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.620199547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.620343981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.621033859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.621162884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.621191441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:18.621761 containerd[1597]: time="2025-01-30T13:48:18.621646675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:18.624615 containerd[1597]: time="2025-01-30T13:48:18.622667999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:18.624615 containerd[1597]: time="2025-01-30T13:48:18.622742439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:18.624615 containerd[1597]: time="2025-01-30T13:48:18.622770134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:18.624615 containerd[1597]: time="2025-01-30T13:48:18.622897309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:18.755422 containerd[1597]: time="2025-01-30T13:48:18.755329193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,Uid:df7bc38b930d3acb165abdc261f36181,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf9c82d55632d942137a9d3e41b20e5652727d6c8e9f49574abffbc7aca8a2ed\"" Jan 30 13:48:18.760432 kubelet[2428]: E0130 13:48:18.759688 2428 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flat" Jan 30 13:48:18.765329 containerd[1597]: time="2025-01-30T13:48:18.765274365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,Uid:04ca85077877b24892268bc37d855acc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0adc1ffb9068b72730259bd84dbbf510c3098c61ec25b7c0f0e3cc181e5c87e\"" Jan 30 13:48:18.770659 containerd[1597]: time="2025-01-30T13:48:18.770387080Z" level=info msg="CreateContainer within sandbox \"bf9c82d55632d942137a9d3e41b20e5652727d6c8e9f49574abffbc7aca8a2ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:48:18.774439 kubelet[2428]: E0130 13:48:18.772553 2428 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" 
podName="kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-21291" Jan 30 13:48:18.777492 containerd[1597]: time="2025-01-30T13:48:18.777450948Z" level=info msg="CreateContainer within sandbox \"a0adc1ffb9068b72730259bd84dbbf510c3098c61ec25b7c0f0e3cc181e5c87e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:48:18.788927 containerd[1597]: time="2025-01-30T13:48:18.787904775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal,Uid:22e3123772296cbdd9806b102070f17e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b21658c70aed018315eb65dc18f1b77afae4a945ea86c6ead3f9926b2ba69c88\"" Jan 30 13:48:18.791441 kubelet[2428]: E0130 13:48:18.791317 2428 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-21291" Jan 30 13:48:18.793022 containerd[1597]: time="2025-01-30T13:48:18.792981327Z" level=info msg="CreateContainer within sandbox \"b21658c70aed018315eb65dc18f1b77afae4a945ea86c6ead3f9926b2ba69c88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:48:18.808944 containerd[1597]: time="2025-01-30T13:48:18.808881916Z" level=info msg="CreateContainer within sandbox \"bf9c82d55632d942137a9d3e41b20e5652727d6c8e9f49574abffbc7aca8a2ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"680002dbea985de9843dec7e7881462bc2a53575f28507c6a44d8e08e3c18977\"" Jan 30 13:48:18.809986 containerd[1597]: time="2025-01-30T13:48:18.809927561Z" level=info msg="StartContainer for \"680002dbea985de9843dec7e7881462bc2a53575f28507c6a44d8e08e3c18977\"" Jan 30 13:48:18.818151 containerd[1597]: 
time="2025-01-30T13:48:18.818006578Z" level=info msg="CreateContainer within sandbox \"a0adc1ffb9068b72730259bd84dbbf510c3098c61ec25b7c0f0e3cc181e5c87e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2dd1aa455c7ae3e743fb2922ffffa91767133331d5434536b5d4d805b02eaf1b\"" Jan 30 13:48:18.818739 kubelet[2428]: W0130 13:48:18.818519 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:18.818739 kubelet[2428]: E0130 13:48:18.818605 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:18.820052 containerd[1597]: time="2025-01-30T13:48:18.819702589Z" level=info msg="StartContainer for \"2dd1aa455c7ae3e743fb2922ffffa91767133331d5434536b5d4d805b02eaf1b\"" Jan 30 13:48:18.828252 containerd[1597]: time="2025-01-30T13:48:18.827381930Z" level=info msg="CreateContainer within sandbox \"b21658c70aed018315eb65dc18f1b77afae4a945ea86c6ead3f9926b2ba69c88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"03640969f44f555430a7290a0d1fdf61b485feb7c718f34443a51968045ce7d0\"" Jan 30 13:48:18.830458 containerd[1597]: time="2025-01-30T13:48:18.830055406Z" level=info msg="StartContainer for \"03640969f44f555430a7290a0d1fdf61b485feb7c718f34443a51968045ce7d0\"" Jan 30 13:48:18.885482 kubelet[2428]: E0130 13:48:18.885366 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="1.6s" Jan 30 13:48:18.995619 containerd[1597]: time="2025-01-30T13:48:18.995560691Z" level=info msg="StartContainer for \"2dd1aa455c7ae3e743fb2922ffffa91767133331d5434536b5d4d805b02eaf1b\" returns successfully" Jan 30 13:48:19.013954 containerd[1597]: time="2025-01-30T13:48:19.012578015Z" level=info msg="StartContainer for \"680002dbea985de9843dec7e7881462bc2a53575f28507c6a44d8e08e3c18977\" returns successfully" Jan 30 13:48:19.023657 kubelet[2428]: I0130 13:48:19.022323 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:19.023657 kubelet[2428]: E0130 13:48:19.022811 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:19.037441 containerd[1597]: time="2025-01-30T13:48:19.037373137Z" level=info msg="StartContainer for \"03640969f44f555430a7290a0d1fdf61b485feb7c718f34443a51968045ce7d0\" returns successfully" Jan 30 13:48:19.062444 kubelet[2428]: W0130 13:48:19.061542 2428 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 13:48:19.063455 kubelet[2428]: E0130 13:48:19.063424 2428 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Jan 30 
13:48:20.641459 kubelet[2428]: I0130 13:48:20.641413 2428 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:21.914708 kubelet[2428]: E0130 13:48:21.914639 2428 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:21.945414 kubelet[2428]: I0130 13:48:21.944784 2428 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:22.315007 kubelet[2428]: E0130 13:48:22.314547 2428 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:22.459365 kubelet[2428]: I0130 13:48:22.459303 2428 apiserver.go:52] "Watching apiserver" Jan 30 13:48:22.479466 kubelet[2428]: I0130 13:48:22.479374 2428 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:48:23.903194 systemd[1]: Reloading requested from client PID 2702 ('systemctl') (unit session-7.scope)... Jan 30 13:48:23.903215 systemd[1]: Reloading... Jan 30 13:48:24.044541 zram_generator::config[2745]: No configuration found. Jan 30 13:48:24.186628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:48:24.300729 systemd[1]: Reloading finished in 396 ms. 
Jan 30 13:48:24.353046 kubelet[2428]: I0130 13:48:24.352993 2428 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:48:24.353718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:24.365363 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:48:24.366066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:24.375590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:24.648709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:24.665277 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:48:24.738710 kubelet[2800]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:24.738710 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:48:24.738710 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:48:24.739383 kubelet[2800]: I0130 13:48:24.738796 2800 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:48:24.744375 kubelet[2800]: I0130 13:48:24.744331 2800 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:48:24.744375 kubelet[2800]: I0130 13:48:24.744358 2800 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:48:24.744843 kubelet[2800]: I0130 13:48:24.744795 2800 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:48:24.746792 kubelet[2800]: I0130 13:48:24.746753 2800 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:48:24.749030 kubelet[2800]: I0130 13:48:24.748837 2800 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:48:24.762437 kubelet[2800]: I0130 13:48:24.762295 2800 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:48:24.763101 kubelet[2800]: I0130 13:48:24.763050 2800 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:48:24.763371 kubelet[2800]: I0130 13:48:24.763093 2800 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:48:24.763613 kubelet[2800]: I0130 13:48:24.763388 2800 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 30 13:48:24.763613 kubelet[2800]: I0130 13:48:24.763431 2800 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:48:24.763613 kubelet[2800]: I0130 13:48:24.763510 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:24.763821 kubelet[2800]: I0130 13:48:24.763675 2800 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:48:24.763821 kubelet[2800]: I0130 13:48:24.763702 2800 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:48:24.763821 kubelet[2800]: I0130 13:48:24.763739 2800 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:48:24.763821 kubelet[2800]: I0130 13:48:24.763772 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:48:24.768426 kubelet[2800]: I0130 13:48:24.767851 2800 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:48:24.768426 kubelet[2800]: I0130 13:48:24.768123 2800 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:48:24.770791 kubelet[2800]: I0130 13:48:24.770764 2800 server.go:1264] "Started kubelet" Jan 30 13:48:24.781423 kubelet[2800]: I0130 13:48:24.779495 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:48:24.789345 kubelet[2800]: I0130 13:48:24.789284 2800 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:48:24.791390 kubelet[2800]: I0130 13:48:24.791363 2800 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:48:24.793922 kubelet[2800]: I0130 13:48:24.793841 2800 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:48:24.794237 kubelet[2800]: I0130 13:48:24.794217 2800 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:48:24.800431 
kubelet[2800]: I0130 13:48:24.797387 2800 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:48:24.800431 kubelet[2800]: I0130 13:48:24.797829 2800 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:48:24.800431 kubelet[2800]: I0130 13:48:24.798045 2800 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:48:24.804094 kubelet[2800]: I0130 13:48:24.804035 2800 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:48:24.805509 kubelet[2800]: I0130 13:48:24.805430 2800 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:48:24.808428 kubelet[2800]: I0130 13:48:24.808078 2800 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:48:24.809246 kubelet[2800]: E0130 13:48:24.809176 2800 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:48:24.832985 kubelet[2800]: I0130 13:48:24.832929 2800 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:48:24.836231 kubelet[2800]: I0130 13:48:24.836194 2800 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:48:24.836359 kubelet[2800]: I0130 13:48:24.836254 2800 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:48:24.836359 kubelet[2800]: I0130 13:48:24.836341 2800 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:48:24.836502 kubelet[2800]: E0130 13:48:24.836456 2800 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:48:24.911545 kubelet[2800]: I0130 13:48:24.908551 2800 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:24.934708 kubelet[2800]: I0130 13:48:24.934234 2800 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:24.934708 kubelet[2800]: I0130 13:48:24.934340 2800 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:24.937302 kubelet[2800]: E0130 13:48:24.937086 2800 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:48:24.970442 kubelet[2800]: I0130 13:48:24.970412 2800 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:48:24.973085 kubelet[2800]: I0130 13:48:24.972726 2800 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:48:24.973085 kubelet[2800]: I0130 13:48:24.972792 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:24.973085 kubelet[2800]: I0130 13:48:24.973014 2800 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:48:24.973085 kubelet[2800]: I0130 13:48:24.973032 2800 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:48:24.973085 kubelet[2800]: I0130 13:48:24.973062 2800 policy_none.go:49] "None policy: Start" Jan 30 13:48:24.975448 kubelet[2800]: I0130 
13:48:24.975015 2800 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:48:24.975448 kubelet[2800]: I0130 13:48:24.975043 2800 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:48:24.975448 kubelet[2800]: I0130 13:48:24.975279 2800 state_mem.go:75] "Updated machine memory state" Jan 30 13:48:24.980335 kubelet[2800]: I0130 13:48:24.978012 2800 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:48:24.980335 kubelet[2800]: I0130 13:48:24.978241 2800 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:48:24.980772 kubelet[2800]: I0130 13:48:24.980648 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:48:25.137902 kubelet[2800]: I0130 13:48:25.137830 2800 topology_manager.go:215] "Topology Admit Handler" podUID="04ca85077877b24892268bc37d855acc" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.138548 kubelet[2800]: I0130 13:48:25.137974 2800 topology_manager.go:215] "Topology Admit Handler" podUID="df7bc38b930d3acb165abdc261f36181" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.138548 kubelet[2800]: I0130 13:48:25.138060 2800 topology_manager.go:215] "Topology Admit Handler" podUID="22e3123772296cbdd9806b102070f17e" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.145583 kubelet[2800]: W0130 13:48:25.145530 2800 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 13:48:25.147896 kubelet[2800]: W0130 13:48:25.147343 2800 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 13:48:25.148976 kubelet[2800]: W0130 13:48:25.148711 2800 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 13:48:25.202655 kubelet[2800]: I0130 13:48:25.202391 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.202655 kubelet[2800]: I0130 13:48:25.202473 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.202655 kubelet[2800]: I0130 13:48:25.202511 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04ca85077877b24892268bc37d855acc-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"04ca85077877b24892268bc37d855acc\") " pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.202655 kubelet[2800]: I0130 13:48:25.202544 2800 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04ca85077877b24892268bc37d855acc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"04ca85077877b24892268bc37d855acc\") " pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.202968 kubelet[2800]: I0130 13:48:25.202575 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.202968 kubelet[2800]: I0130 13:48:25.202600 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.202968 kubelet[2800]: I0130 13:48:25.202628 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df7bc38b930d3acb165abdc261f36181-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"df7bc38b930d3acb165abdc261f36181\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.202968 kubelet[2800]: I0130 13:48:25.202666 
2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22e3123772296cbdd9806b102070f17e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"22e3123772296cbdd9806b102070f17e\") " pod="kube-system/kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.204262 kubelet[2800]: I0130 13:48:25.202694 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04ca85077877b24892268bc37d855acc-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" (UID: \"04ca85077877b24892268bc37d855acc\") " pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.765429 kubelet[2800]: I0130 13:48:25.764497 2800 apiserver.go:52] "Watching apiserver" Jan 30 13:48:25.798992 kubelet[2800]: I0130 13:48:25.798941 2800 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:48:25.901649 kubelet[2800]: W0130 13:48:25.901603 2800 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 13:48:25.901855 kubelet[2800]: E0130 13:48:25.901718 2800 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:48:25.953418 kubelet[2800]: I0130 13:48:25.953255 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" 
podStartSLOduration=0.953204265 podStartE2EDuration="953.204265ms" podCreationTimestamp="2025-01-30 13:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:25.951746671 +0000 UTC m=+1.278385197" watchObservedRunningTime="2025-01-30 13:48:25.953204265 +0000 UTC m=+1.279842792" Jan 30 13:48:25.955763 kubelet[2800]: I0130 13:48:25.955536 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" podStartSLOduration=0.955494768 podStartE2EDuration="955.494768ms" podCreationTimestamp="2025-01-30 13:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:25.916633074 +0000 UTC m=+1.243271605" watchObservedRunningTime="2025-01-30 13:48:25.955494768 +0000 UTC m=+1.282133317" Jan 30 13:48:26.011836 kubelet[2800]: I0130 13:48:26.010367 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" podStartSLOduration=1.010340671 podStartE2EDuration="1.010340671s" podCreationTimestamp="2025-01-30 13:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:25.979358207 +0000 UTC m=+1.305996733" watchObservedRunningTime="2025-01-30 13:48:26.010340671 +0000 UTC m=+1.336979198" Jan 30 13:48:29.118564 update_engine[1584]: I20250130 13:48:29.118449 1584 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:48:29.185588 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2867) Jan 30 13:48:29.307315 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2868) Jan 30 13:48:29.409437 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2868) Jan 30 13:48:30.514954 sudo[1863]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:30.568219 sshd[1859]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:30.576420 systemd[1]: sshd@6-10.128.0.26:22-139.178.68.195:55970.service: Deactivated successfully. Jan 30 13:48:30.581055 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:48:30.582323 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:48:30.583929 systemd-logind[1582]: Removed session 7. Jan 30 13:48:38.661016 kubelet[2800]: I0130 13:48:38.660309 2800 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:48:38.661775 containerd[1597]: time="2025-01-30T13:48:38.660855894Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:48:38.665110 kubelet[2800]: I0130 13:48:38.664615 2800 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:48:39.363909 kubelet[2800]: I0130 13:48:39.363691 2800 topology_manager.go:215] "Topology Admit Handler" podUID="a78dd299-f06a-4a88-8698-5ee4675ed75d" podNamespace="kube-system" podName="kube-proxy-s5xxq" Jan 30 13:48:39.401963 kubelet[2800]: I0130 13:48:39.401881 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a78dd299-f06a-4a88-8698-5ee4675ed75d-kube-proxy\") pod \"kube-proxy-s5xxq\" (UID: \"a78dd299-f06a-4a88-8698-5ee4675ed75d\") " pod="kube-system/kube-proxy-s5xxq" Jan 30 13:48:39.401963 kubelet[2800]: I0130 13:48:39.401959 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a78dd299-f06a-4a88-8698-5ee4675ed75d-xtables-lock\") pod \"kube-proxy-s5xxq\" (UID: \"a78dd299-f06a-4a88-8698-5ee4675ed75d\") " pod="kube-system/kube-proxy-s5xxq" Jan 30 13:48:39.402233 kubelet[2800]: I0130 13:48:39.401999 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhwr\" (UniqueName: \"kubernetes.io/projected/a78dd299-f06a-4a88-8698-5ee4675ed75d-kube-api-access-hbhwr\") pod \"kube-proxy-s5xxq\" (UID: \"a78dd299-f06a-4a88-8698-5ee4675ed75d\") " pod="kube-system/kube-proxy-s5xxq" Jan 30 13:48:39.402233 kubelet[2800]: I0130 13:48:39.402050 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a78dd299-f06a-4a88-8698-5ee4675ed75d-lib-modules\") pod \"kube-proxy-s5xxq\" (UID: \"a78dd299-f06a-4a88-8698-5ee4675ed75d\") " pod="kube-system/kube-proxy-s5xxq" Jan 30 13:48:39.644605 kubelet[2800]: I0130 13:48:39.644452 2800 topology_manager.go:215] "Topology 
Admit Handler" podUID="23b051f5-0fcc-4ed5-a6cb-1c808a70c42a" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-t5r8f" Jan 30 13:48:39.673380 containerd[1597]: time="2025-01-30T13:48:39.673313203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5xxq,Uid:a78dd299-f06a-4a88-8698-5ee4675ed75d,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:39.704446 kubelet[2800]: I0130 13:48:39.704231 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkpwx\" (UniqueName: \"kubernetes.io/projected/23b051f5-0fcc-4ed5-a6cb-1c808a70c42a-kube-api-access-rkpwx\") pod \"tigera-operator-7bc55997bb-t5r8f\" (UID: \"23b051f5-0fcc-4ed5-a6cb-1c808a70c42a\") " pod="tigera-operator/tigera-operator-7bc55997bb-t5r8f" Jan 30 13:48:39.704446 kubelet[2800]: I0130 13:48:39.704356 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23b051f5-0fcc-4ed5-a6cb-1c808a70c42a-var-lib-calico\") pod \"tigera-operator-7bc55997bb-t5r8f\" (UID: \"23b051f5-0fcc-4ed5-a6cb-1c808a70c42a\") " pod="tigera-operator/tigera-operator-7bc55997bb-t5r8f" Jan 30 13:48:39.714110 containerd[1597]: time="2025-01-30T13:48:39.713979542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:39.714464 containerd[1597]: time="2025-01-30T13:48:39.714217219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:39.714464 containerd[1597]: time="2025-01-30T13:48:39.714272599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.714746 containerd[1597]: time="2025-01-30T13:48:39.714437359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:39.772898 containerd[1597]: time="2025-01-30T13:48:39.772845570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s5xxq,Uid:a78dd299-f06a-4a88-8698-5ee4675ed75d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e604cd177c150f1518af41b81247390c072360d01f6a0189fa7f08127086e976\"" Jan 30 13:48:39.777797 containerd[1597]: time="2025-01-30T13:48:39.777561880Z" level=info msg="CreateContainer within sandbox \"e604cd177c150f1518af41b81247390c072360d01f6a0189fa7f08127086e976\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:48:39.801797 containerd[1597]: time="2025-01-30T13:48:39.801735032Z" level=info msg="CreateContainer within sandbox \"e604cd177c150f1518af41b81247390c072360d01f6a0189fa7f08127086e976\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ba825c9ca7fb7b0b4d5d7500e847330e2d6401580913c71f5107f2686068926b\"" Jan 30 13:48:39.802965 containerd[1597]: time="2025-01-30T13:48:39.802609467Z" level=info msg="StartContainer for \"ba825c9ca7fb7b0b4d5d7500e847330e2d6401580913c71f5107f2686068926b\"" Jan 30 13:48:39.890130 containerd[1597]: time="2025-01-30T13:48:39.889998341Z" level=info msg="StartContainer for \"ba825c9ca7fb7b0b4d5d7500e847330e2d6401580913c71f5107f2686068926b\" returns successfully" Jan 30 13:48:39.916771 kubelet[2800]: I0130 13:48:39.916587 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s5xxq" podStartSLOduration=0.916561374 podStartE2EDuration="916.561374ms" podCreationTimestamp="2025-01-30 13:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:39.915484659 +0000 UTC m=+15.242123187" watchObservedRunningTime="2025-01-30 13:48:39.916561374 +0000 UTC m=+15.243199903" Jan 30 13:48:39.959365 containerd[1597]: time="2025-01-30T13:48:39.957514036Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-t5r8f,Uid:23b051f5-0fcc-4ed5-a6cb-1c808a70c42a,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:48:40.005040 containerd[1597]: time="2025-01-30T13:48:40.004131847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:40.005040 containerd[1597]: time="2025-01-30T13:48:40.004217596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:40.005040 containerd[1597]: time="2025-01-30T13:48:40.004246113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:40.008991 containerd[1597]: time="2025-01-30T13:48:40.008918109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:40.085213 containerd[1597]: time="2025-01-30T13:48:40.085034210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-t5r8f,Uid:23b051f5-0fcc-4ed5-a6cb-1c808a70c42a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b6531db46ad228118bccf622b2dbece4bdb8c16006da79de2b39f2180fccf367\"" Jan 30 13:48:40.088798 containerd[1597]: time="2025-01-30T13:48:40.088739319Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:48:41.199119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049539338.mount: Deactivated successfully. 
Jan 30 13:48:42.614793 containerd[1597]: time="2025-01-30T13:48:42.614712533Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:42.616570 containerd[1597]: time="2025-01-30T13:48:42.616490612Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:48:42.618622 containerd[1597]: time="2025-01-30T13:48:42.618524590Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:42.623334 containerd[1597]: time="2025-01-30T13:48:42.623238032Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:42.625254 containerd[1597]: time="2025-01-30T13:48:42.624634498Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.53583946s" Jan 30 13:48:42.625254 containerd[1597]: time="2025-01-30T13:48:42.624697879Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:48:42.627847 containerd[1597]: time="2025-01-30T13:48:42.627799704Z" level=info msg="CreateContainer within sandbox \"b6531db46ad228118bccf622b2dbece4bdb8c16006da79de2b39f2180fccf367\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:48:42.649664 containerd[1597]: time="2025-01-30T13:48:42.649604656Z" level=info msg="CreateContainer within sandbox 
\"b6531db46ad228118bccf622b2dbece4bdb8c16006da79de2b39f2180fccf367\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b6829804f9f68139e55b4e0393fa3ad106930389479109aab3c7513533ae689a\"" Jan 30 13:48:42.650563 containerd[1597]: time="2025-01-30T13:48:42.650342986Z" level=info msg="StartContainer for \"b6829804f9f68139e55b4e0393fa3ad106930389479109aab3c7513533ae689a\"" Jan 30 13:48:42.694547 systemd[1]: run-containerd-runc-k8s.io-b6829804f9f68139e55b4e0393fa3ad106930389479109aab3c7513533ae689a-runc.bx7u1Q.mount: Deactivated successfully. Jan 30 13:48:42.736770 containerd[1597]: time="2025-01-30T13:48:42.736700086Z" level=info msg="StartContainer for \"b6829804f9f68139e55b4e0393fa3ad106930389479109aab3c7513533ae689a\" returns successfully" Jan 30 13:48:45.973771 kubelet[2800]: I0130 13:48:45.968363 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-t5r8f" podStartSLOduration=4.430061821 podStartE2EDuration="6.968334007s" podCreationTimestamp="2025-01-30 13:48:39 +0000 UTC" firstStartedPulling="2025-01-30 13:48:40.087679652 +0000 UTC m=+15.414318155" lastFinishedPulling="2025-01-30 13:48:42.62595183 +0000 UTC m=+17.952590341" observedRunningTime="2025-01-30 13:48:42.922107806 +0000 UTC m=+18.248746334" watchObservedRunningTime="2025-01-30 13:48:45.968334007 +0000 UTC m=+21.294972571" Jan 30 13:48:45.973771 kubelet[2800]: I0130 13:48:45.968578 2800 topology_manager.go:215] "Topology Admit Handler" podUID="4df20951-f5c3-4a32-8d14-0e48925c4fa5" podNamespace="calico-system" podName="calico-typha-647956c58-wx9nq" Jan 30 13:48:46.046802 kubelet[2800]: I0130 13:48:46.046247 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4df20951-f5c3-4a32-8d14-0e48925c4fa5-typha-certs\") pod \"calico-typha-647956c58-wx9nq\" (UID: \"4df20951-f5c3-4a32-8d14-0e48925c4fa5\") " 
pod="calico-system/calico-typha-647956c58-wx9nq" Jan 30 13:48:46.046802 kubelet[2800]: I0130 13:48:46.046566 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df20951-f5c3-4a32-8d14-0e48925c4fa5-tigera-ca-bundle\") pod \"calico-typha-647956c58-wx9nq\" (UID: \"4df20951-f5c3-4a32-8d14-0e48925c4fa5\") " pod="calico-system/calico-typha-647956c58-wx9nq" Jan 30 13:48:46.046802 kubelet[2800]: I0130 13:48:46.046746 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlrm\" (UniqueName: \"kubernetes.io/projected/4df20951-f5c3-4a32-8d14-0e48925c4fa5-kube-api-access-2dlrm\") pod \"calico-typha-647956c58-wx9nq\" (UID: \"4df20951-f5c3-4a32-8d14-0e48925c4fa5\") " pod="calico-system/calico-typha-647956c58-wx9nq" Jan 30 13:48:46.070330 kubelet[2800]: I0130 13:48:46.070282 2800 topology_manager.go:215] "Topology Admit Handler" podUID="2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5" podNamespace="calico-system" podName="calico-node-68r5d" Jan 30 13:48:46.147595 kubelet[2800]: I0130 13:48:46.147528 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-var-lib-calico\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.147595 kubelet[2800]: I0130 13:48:46.147596 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjs4p\" (UniqueName: \"kubernetes.io/projected/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-kube-api-access-rjs4p\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.147848 kubelet[2800]: I0130 13:48:46.147630 2800 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-tigera-ca-bundle\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.147848 kubelet[2800]: I0130 13:48:46.147654 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-cni-log-dir\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.147848 kubelet[2800]: I0130 13:48:46.147683 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-node-certs\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.147848 kubelet[2800]: I0130 13:48:46.147728 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-policysync\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.147848 kubelet[2800]: I0130 13:48:46.147753 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-cni-bin-dir\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.148097 kubelet[2800]: I0130 13:48:46.147778 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-cni-net-dir\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.148097 kubelet[2800]: I0130 13:48:46.147824 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-lib-modules\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.148097 kubelet[2800]: I0130 13:48:46.147861 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-xtables-lock\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.148097 kubelet[2800]: I0130 13:48:46.147893 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-flexvol-driver-host\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.148097 kubelet[2800]: I0130 13:48:46.147935 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5-var-run-calico\") pod \"calico-node-68r5d\" (UID: \"2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5\") " pod="calico-system/calico-node-68r5d" Jan 30 13:48:46.262013 kubelet[2800]: E0130 13:48:46.255550 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.262013 kubelet[2800]: W0130 13:48:46.255600 2800 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.262013 kubelet[2800]: E0130 13:48:46.255645 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.262632 kubelet[2800]: E0130 13:48:46.262330 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.262632 kubelet[2800]: W0130 13:48:46.262390 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.262632 kubelet[2800]: E0130 13:48:46.262568 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.270382 kubelet[2800]: E0130 13:48:46.270234 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.270382 kubelet[2800]: W0130 13:48:46.270261 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.270382 kubelet[2800]: E0130 13:48:46.270294 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.276795 kubelet[2800]: E0130 13:48:46.273666 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.276795 kubelet[2800]: W0130 13:48:46.273691 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.276795 kubelet[2800]: E0130 13:48:46.274004 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.276795 kubelet[2800]: W0130 13:48:46.274019 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.276795 kubelet[2800]: E0130 13:48:46.276634 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.276795 kubelet[2800]: W0130 13:48:46.276650 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.276795 kubelet[2800]: E0130 13:48:46.276666 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.276795 kubelet[2800]: E0130 13:48:46.276636 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.277311 kubelet[2800]: E0130 13:48:46.277016 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.277311 kubelet[2800]: W0130 13:48:46.277030 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.277311 kubelet[2800]: E0130 13:48:46.277048 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.277808 kubelet[2800]: E0130 13:48:46.277361 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.277808 kubelet[2800]: W0130 13:48:46.277374 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.277808 kubelet[2800]: E0130 13:48:46.277390 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.277808 kubelet[2800]: E0130 13:48:46.277454 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.280508 kubelet[2800]: E0130 13:48:46.279641 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.280508 kubelet[2800]: W0130 13:48:46.279661 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.280508 kubelet[2800]: E0130 13:48:46.279680 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.284441 containerd[1597]: time="2025-01-30T13:48:46.281748475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647956c58-wx9nq,Uid:4df20951-f5c3-4a32-8d14-0e48925c4fa5,Namespace:calico-system,Attempt:0,}" Jan 30 13:48:46.286691 kubelet[2800]: E0130 13:48:46.284388 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.286691 kubelet[2800]: W0130 13:48:46.286346 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.286691 kubelet[2800]: E0130 13:48:46.286374 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.292674 kubelet[2800]: I0130 13:48:46.292636 2800 topology_manager.go:215] "Topology Admit Handler" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" podNamespace="calico-system" podName="csi-node-driver-jrv56" Jan 30 13:48:46.293216 kubelet[2800]: E0130 13:48:46.293077 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:46.341727 kubelet[2800]: E0130 13:48:46.340497 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.341727 kubelet[2800]: W0130 13:48:46.340530 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.341727 kubelet[2800]: E0130 13:48:46.340560 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.347168 kubelet[2800]: E0130 13:48:46.345453 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.347168 kubelet[2800]: W0130 13:48:46.345479 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.347168 kubelet[2800]: E0130 13:48:46.345505 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.351842 kubelet[2800]: E0130 13:48:46.350249 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.351842 kubelet[2800]: W0130 13:48:46.350280 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.351842 kubelet[2800]: E0130 13:48:46.350316 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.355936 kubelet[2800]: E0130 13:48:46.353530 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.355936 kubelet[2800]: W0130 13:48:46.353554 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.355936 kubelet[2800]: E0130 13:48:46.353583 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.356192 kubelet[2800]: E0130 13:48:46.356125 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.356192 kubelet[2800]: W0130 13:48:46.356144 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.356192 kubelet[2800]: E0130 13:48:46.356169 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.358503 kubelet[2800]: E0130 13:48:46.358283 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.358503 kubelet[2800]: W0130 13:48:46.358310 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.358503 kubelet[2800]: E0130 13:48:46.358332 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.361077 kubelet[2800]: E0130 13:48:46.360553 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.361077 kubelet[2800]: W0130 13:48:46.360574 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.361077 kubelet[2800]: E0130 13:48:46.360595 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.363961 kubelet[2800]: E0130 13:48:46.363764 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.363961 kubelet[2800]: W0130 13:48:46.363783 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.363961 kubelet[2800]: E0130 13:48:46.363804 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.367039 kubelet[2800]: E0130 13:48:46.366610 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.367039 kubelet[2800]: W0130 13:48:46.366633 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.370530 kubelet[2800]: E0130 13:48:46.367447 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.370530 kubelet[2800]: I0130 13:48:46.367570 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ce3c383-738d-490f-a267-4c123b509bcf-kubelet-dir\") pod \"csi-node-driver-jrv56\" (UID: \"8ce3c383-738d-490f-a267-4c123b509bcf\") " pod="calico-system/csi-node-driver-jrv56" Jan 30 13:48:46.370530 kubelet[2800]: E0130 13:48:46.369498 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.370530 kubelet[2800]: W0130 13:48:46.369514 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.370530 kubelet[2800]: E0130 13:48:46.370463 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.374416 kubelet[2800]: E0130 13:48:46.372628 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.374416 kubelet[2800]: W0130 13:48:46.372646 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.374416 kubelet[2800]: E0130 13:48:46.372741 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.374641 kubelet[2800]: E0130 13:48:46.374627 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.374699 kubelet[2800]: W0130 13:48:46.374642 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.377493 kubelet[2800]: E0130 13:48:46.374771 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.377493 kubelet[2800]: I0130 13:48:46.374814 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ce3c383-738d-490f-a267-4c123b509bcf-varrun\") pod \"csi-node-driver-jrv56\" (UID: \"8ce3c383-738d-490f-a267-4c123b509bcf\") " pod="calico-system/csi-node-driver-jrv56" Jan 30 13:48:46.389446 kubelet[2800]: E0130 13:48:46.379496 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.389446 kubelet[2800]: W0130 13:48:46.379516 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.389446 kubelet[2800]: E0130 13:48:46.379645 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.389446 kubelet[2800]: E0130 13:48:46.380612 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.389446 kubelet[2800]: W0130 13:48:46.380626 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.389446 kubelet[2800]: E0130 13:48:46.381771 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.389446 kubelet[2800]: E0130 13:48:46.382933 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.389446 kubelet[2800]: W0130 13:48:46.382950 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.389446 kubelet[2800]: E0130 13:48:46.383814 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.389446 kubelet[2800]: E0130 13:48:46.384784 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.390032 kubelet[2800]: W0130 13:48:46.384800 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.390032 kubelet[2800]: E0130 13:48:46.386204 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.391358 kubelet[2800]: E0130 13:48:46.391328 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.391602 kubelet[2800]: W0130 13:48:46.391574 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.395559 kubelet[2800]: E0130 13:48:46.395534 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.395559 kubelet[2800]: W0130 13:48:46.395558 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.396973 kubelet[2800]: E0130 13:48:46.396945 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.397611 kubelet[2800]: W0130 13:48:46.397580 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.397711 kubelet[2800]: E0130 13:48:46.397621 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.402644 kubelet[2800]: E0130 13:48:46.402613 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.402765 kubelet[2800]: E0130 13:48:46.402661 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.403002 kubelet[2800]: E0130 13:48:46.402983 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.403086 kubelet[2800]: W0130 13:48:46.403003 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.403086 kubelet[2800]: E0130 13:48:46.403025 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.404688 kubelet[2800]: E0130 13:48:46.404663 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.404688 kubelet[2800]: W0130 13:48:46.404688 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.405804 kubelet[2800]: E0130 13:48:46.404853 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.408419 kubelet[2800]: E0130 13:48:46.408379 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.411483 kubelet[2800]: W0130 13:48:46.411451 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.411588 kubelet[2800]: E0130 13:48:46.411491 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.411815 kubelet[2800]: E0130 13:48:46.411794 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.411883 kubelet[2800]: W0130 13:48:46.411815 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.411883 kubelet[2800]: E0130 13:48:46.411833 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.417422 kubelet[2800]: E0130 13:48:46.414809 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.417422 kubelet[2800]: W0130 13:48:46.414827 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.417422 kubelet[2800]: E0130 13:48:46.414846 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.418523 kubelet[2800]: E0130 13:48:46.418497 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.418523 kubelet[2800]: W0130 13:48:46.418522 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.418671 kubelet[2800]: E0130 13:48:46.418542 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.419043 kubelet[2800]: E0130 13:48:46.419023 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.419539 kubelet[2800]: W0130 13:48:46.419512 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.419636 kubelet[2800]: E0130 13:48:46.419545 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.423421 kubelet[2800]: E0130 13:48:46.422686 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.423421 kubelet[2800]: W0130 13:48:46.422704 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.423421 kubelet[2800]: E0130 13:48:46.422723 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.423644 kubelet[2800]: E0130 13:48:46.423506 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.423644 kubelet[2800]: W0130 13:48:46.423521 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.423644 kubelet[2800]: E0130 13:48:46.423538 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.475490 containerd[1597]: time="2025-01-30T13:48:46.468023117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:46.475490 containerd[1597]: time="2025-01-30T13:48:46.468105036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:46.475490 containerd[1597]: time="2025-01-30T13:48:46.468148377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:46.475490 containerd[1597]: time="2025-01-30T13:48:46.468310510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:46.517213 kubelet[2800]: E0130 13:48:46.517082 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.517213 kubelet[2800]: W0130 13:48:46.517110 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.517213 kubelet[2800]: E0130 13:48:46.517190 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.517484 kubelet[2800]: I0130 13:48:46.517291 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ce3c383-738d-490f-a267-4c123b509bcf-registration-dir\") pod \"csi-node-driver-jrv56\" (UID: \"8ce3c383-738d-490f-a267-4c123b509bcf\") " pod="calico-system/csi-node-driver-jrv56" Jan 30 13:48:46.520076 kubelet[2800]: E0130 13:48:46.519172 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.520076 kubelet[2800]: W0130 13:48:46.519196 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.520076 kubelet[2800]: E0130 13:48:46.519224 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.522429 kubelet[2800]: E0130 13:48:46.520562 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.522429 kubelet[2800]: W0130 13:48:46.520694 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.522429 kubelet[2800]: E0130 13:48:46.521368 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.524290 kubelet[2800]: E0130 13:48:46.523482 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.524290 kubelet[2800]: W0130 13:48:46.523507 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.524290 kubelet[2800]: E0130 13:48:46.523529 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.524290 kubelet[2800]: I0130 13:48:46.523572 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ce3c383-738d-490f-a267-4c123b509bcf-socket-dir\") pod \"csi-node-driver-jrv56\" (UID: \"8ce3c383-738d-490f-a267-4c123b509bcf\") " pod="calico-system/csi-node-driver-jrv56" Jan 30 13:48:46.526345 kubelet[2800]: E0130 13:48:46.526320 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.526345 kubelet[2800]: W0130 13:48:46.526345 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.526538 kubelet[2800]: E0130 13:48:46.526433 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.529326 kubelet[2800]: E0130 13:48:46.529302 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.529326 kubelet[2800]: W0130 13:48:46.529326 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.529628 kubelet[2800]: E0130 13:48:46.529345 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.532555 kubelet[2800]: E0130 13:48:46.532532 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.532555 kubelet[2800]: W0130 13:48:46.532555 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.532833 kubelet[2800]: E0130 13:48:46.532573 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.535605 kubelet[2800]: E0130 13:48:46.534611 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.535605 kubelet[2800]: W0130 13:48:46.534630 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.535605 kubelet[2800]: E0130 13:48:46.535112 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.539018 kubelet[2800]: E0130 13:48:46.538353 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.539499 kubelet[2800]: W0130 13:48:46.539013 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.540511 kubelet[2800]: E0130 13:48:46.540469 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.540666 kubelet[2800]: E0130 13:48:46.540646 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.540666 kubelet[2800]: W0130 13:48:46.540666 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.540847 kubelet[2800]: E0130 13:48:46.540685 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.540847 kubelet[2800]: I0130 13:48:46.540721 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftfc\" (UniqueName: \"kubernetes.io/projected/8ce3c383-738d-490f-a267-4c123b509bcf-kube-api-access-sftfc\") pod \"csi-node-driver-jrv56\" (UID: \"8ce3c383-738d-490f-a267-4c123b509bcf\") " pod="calico-system/csi-node-driver-jrv56" Jan 30 13:48:46.542824 kubelet[2800]: E0130 13:48:46.542794 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.542824 kubelet[2800]: W0130 13:48:46.542824 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.544138 kubelet[2800]: E0130 13:48:46.544106 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.546164 kubelet[2800]: E0130 13:48:46.545931 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.546164 kubelet[2800]: W0130 13:48:46.545951 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.546709 kubelet[2800]: E0130 13:48:46.546470 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.548085 kubelet[2800]: E0130 13:48:46.547706 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.548085 kubelet[2800]: W0130 13:48:46.547723 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.548085 kubelet[2800]: E0130 13:48:46.547783 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.548744 kubelet[2800]: E0130 13:48:46.548611 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.548744 kubelet[2800]: W0130 13:48:46.548628 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.549966 kubelet[2800]: E0130 13:48:46.549449 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.550069 kubelet[2800]: E0130 13:48:46.550000 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.550069 kubelet[2800]: W0130 13:48:46.550015 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.550184 kubelet[2800]: E0130 13:48:46.550076 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.551525 kubelet[2800]: E0130 13:48:46.551465 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.551739 kubelet[2800]: W0130 13:48:46.551484 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.551739 kubelet[2800]: E0130 13:48:46.551638 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.553455 kubelet[2800]: E0130 13:48:46.552299 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.553455 kubelet[2800]: W0130 13:48:46.552698 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.553455 kubelet[2800]: E0130 13:48:46.552724 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.553936 kubelet[2800]: E0130 13:48:46.553913 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.554024 kubelet[2800]: W0130 13:48:46.553941 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.554024 kubelet[2800]: E0130 13:48:46.553959 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.555658 kubelet[2800]: E0130 13:48:46.555593 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.555658 kubelet[2800]: W0130 13:48:46.555612 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.555658 kubelet[2800]: E0130 13:48:46.555632 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.644474 containerd[1597]: time="2025-01-30T13:48:46.644371959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647956c58-wx9nq,Uid:4df20951-f5c3-4a32-8d14-0e48925c4fa5,Namespace:calico-system,Attempt:0,} returns sandbox id \"31ec61995f717c0c923510b42e5d66c510e06dfe6b4e8fb9d9f985f0f7113678\"" Jan 30 13:48:46.648040 kubelet[2800]: E0130 13:48:46.647807 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.648040 kubelet[2800]: W0130 13:48:46.647835 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.648040 kubelet[2800]: E0130 13:48:46.647862 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.648769 kubelet[2800]: E0130 13:48:46.648582 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.648769 kubelet[2800]: W0130 13:48:46.648603 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.648769 kubelet[2800]: E0130 13:48:46.648623 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.649511 kubelet[2800]: E0130 13:48:46.649266 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.649511 kubelet[2800]: W0130 13:48:46.649285 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.649511 kubelet[2800]: E0130 13:48:46.649303 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.650118 kubelet[2800]: E0130 13:48:46.649953 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.650118 kubelet[2800]: W0130 13:48:46.649972 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.650118 kubelet[2800]: E0130 13:48:46.650003 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.650708 kubelet[2800]: E0130 13:48:46.650542 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.650708 kubelet[2800]: W0130 13:48:46.650561 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.650708 kubelet[2800]: E0130 13:48:46.650589 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.651156 kubelet[2800]: E0130 13:48:46.651111 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.651156 kubelet[2800]: W0130 13:48:46.651130 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.651156 kubelet[2800]: E0130 13:48:46.651148 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.652224 kubelet[2800]: E0130 13:48:46.652196 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.652344 kubelet[2800]: W0130 13:48:46.652261 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.652344 kubelet[2800]: E0130 13:48:46.652282 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.653169 kubelet[2800]: E0130 13:48:46.652809 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.653169 kubelet[2800]: W0130 13:48:46.652828 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.653169 kubelet[2800]: E0130 13:48:46.652846 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.653836 kubelet[2800]: E0130 13:48:46.653293 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.653836 kubelet[2800]: W0130 13:48:46.653360 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.653836 kubelet[2800]: E0130 13:48:46.653380 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.654569 kubelet[2800]: E0130 13:48:46.653879 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.654569 kubelet[2800]: W0130 13:48:46.653895 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.654569 kubelet[2800]: E0130 13:48:46.653911 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.654569 kubelet[2800]: E0130 13:48:46.654417 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.654569 kubelet[2800]: W0130 13:48:46.654432 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.654569 kubelet[2800]: E0130 13:48:46.654450 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.656116 kubelet[2800]: E0130 13:48:46.655426 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.656116 kubelet[2800]: W0130 13:48:46.655447 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.656116 kubelet[2800]: E0130 13:48:46.655463 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.656116 kubelet[2800]: E0130 13:48:46.655950 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.656116 kubelet[2800]: W0130 13:48:46.655965 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.656116 kubelet[2800]: E0130 13:48:46.655994 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.656925 kubelet[2800]: E0130 13:48:46.656801 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.656925 kubelet[2800]: W0130 13:48:46.656816 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.656925 kubelet[2800]: E0130 13:48:46.656832 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.657891 kubelet[2800]: E0130 13:48:46.657706 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.657891 kubelet[2800]: W0130 13:48:46.657729 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.657891 kubelet[2800]: E0130 13:48:46.657747 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:46.660651 containerd[1597]: time="2025-01-30T13:48:46.660601814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:48:46.673314 kubelet[2800]: E0130 13:48:46.673271 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:46.673314 kubelet[2800]: W0130 13:48:46.673304 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:46.673314 kubelet[2800]: E0130 13:48:46.673333 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:46.679281 containerd[1597]: time="2025-01-30T13:48:46.679217203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68r5d,Uid:2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5,Namespace:calico-system,Attempt:0,}" Jan 30 13:48:46.723686 containerd[1597]: time="2025-01-30T13:48:46.723143206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:46.723686 containerd[1597]: time="2025-01-30T13:48:46.723248879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:46.723686 containerd[1597]: time="2025-01-30T13:48:46.723289456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:46.723686 containerd[1597]: time="2025-01-30T13:48:46.723454080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:46.810808 containerd[1597]: time="2025-01-30T13:48:46.809848598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68r5d,Uid:2c0e5edb-17cd-4399-ab5e-e3d2a9027fb5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4413c392cdb46f670c4467110eff29fef65f5d459aada106dd77d33210e1f7e\"" Jan 30 13:48:47.686857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654021476.mount: Deactivated successfully. Jan 30 13:48:47.836882 kubelet[2800]: E0130 13:48:47.836767 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:48.499873 containerd[1597]: time="2025-01-30T13:48:48.499800535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:48.501255 containerd[1597]: time="2025-01-30T13:48:48.501145645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:48:48.502881 containerd[1597]: time="2025-01-30T13:48:48.502810994Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:48.505891 containerd[1597]: time="2025-01-30T13:48:48.505847677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:48.507027 containerd[1597]: time="2025-01-30T13:48:48.506833544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.846166807s" Jan 30 13:48:48.507027 containerd[1597]: time="2025-01-30T13:48:48.506880985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:48:48.509761 containerd[1597]: time="2025-01-30T13:48:48.509549665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:48:48.523754 containerd[1597]: time="2025-01-30T13:48:48.523696237Z" level=info msg="CreateContainer within sandbox \"31ec61995f717c0c923510b42e5d66c510e06dfe6b4e8fb9d9f985f0f7113678\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:48:48.554984 containerd[1597]: time="2025-01-30T13:48:48.554922448Z" level=info msg="CreateContainer within sandbox \"31ec61995f717c0c923510b42e5d66c510e06dfe6b4e8fb9d9f985f0f7113678\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"635925aea0d2dbd83ff03a26eb4c8013013d02f2646365a38488727791b19efe\"" Jan 30 13:48:48.556096 containerd[1597]: time="2025-01-30T13:48:48.555846493Z" level=info msg="StartContainer for \"635925aea0d2dbd83ff03a26eb4c8013013d02f2646365a38488727791b19efe\"" Jan 30 13:48:48.679527 containerd[1597]: time="2025-01-30T13:48:48.679365841Z" level=info msg="StartContainer for \"635925aea0d2dbd83ff03a26eb4c8013013d02f2646365a38488727791b19efe\" returns successfully" Jan 30 13:48:48.945805 kubelet[2800]: E0130 13:48:48.945686 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.945805 kubelet[2800]: W0130 13:48:48.945716 2800 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.945805 kubelet[2800]: E0130 13:48:48.945767 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:48.949219 kubelet[2800]: E0130 13:48:48.946693 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.949219 kubelet[2800]: W0130 13:48:48.946733 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.949219 kubelet[2800]: E0130 13:48:48.946755 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:48.949219 kubelet[2800]: E0130 13:48:48.947372 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.949219 kubelet[2800]: W0130 13:48:48.947388 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.949219 kubelet[2800]: E0130 13:48:48.947437 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:48.949219 kubelet[2800]: E0130 13:48:48.947811 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.949219 kubelet[2800]: W0130 13:48:48.947851 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.949219 kubelet[2800]: E0130 13:48:48.947870 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:48.949219 kubelet[2800]: E0130 13:48:48.948269 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.950876 kubelet[2800]: W0130 13:48:48.948291 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.950876 kubelet[2800]: E0130 13:48:48.948313 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:48.950876 kubelet[2800]: E0130 13:48:48.948703 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.950876 kubelet[2800]: W0130 13:48:48.948717 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.950876 kubelet[2800]: E0130 13:48:48.948758 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:48.950876 kubelet[2800]: E0130 13:48:48.949184 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.950876 kubelet[2800]: W0130 13:48:48.949199 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.950876 kubelet[2800]: E0130 13:48:48.949216 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:48.950876 kubelet[2800]: E0130 13:48:48.949629 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.950876 kubelet[2800]: W0130 13:48:48.949642 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.951589 kubelet[2800]: E0130 13:48:48.949679 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:48.951589 kubelet[2800]: E0130 13:48:48.950122 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.951589 kubelet[2800]: W0130 13:48:48.950137 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.951589 kubelet[2800]: E0130 13:48:48.950285 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:48:48.951589 kubelet[2800]: E0130 13:48:48.950750 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.951589 kubelet[2800]: W0130 13:48:48.950764 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.951589 kubelet[2800]: E0130 13:48:48.950780 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:48.951589 kubelet[2800]: E0130 13:48:48.951166 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:48:48.951589 kubelet[2800]: W0130 13:48:48.951180 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:48:48.951589 kubelet[2800]: E0130 13:48:48.951195 2800 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:48:49.479241 containerd[1597]: time="2025-01-30T13:48:49.479174422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:49.480591 containerd[1597]: time="2025-01-30T13:48:49.480527580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:48:49.481862 containerd[1597]: time="2025-01-30T13:48:49.481770774Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:49.485280 containerd[1597]: time="2025-01-30T13:48:49.485195956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:49.486529 containerd[1597]: time="2025-01-30T13:48:49.486329263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 976.733657ms" Jan 30 13:48:49.486529 containerd[1597]: time="2025-01-30T13:48:49.486379258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:48:49.491294 containerd[1597]: time="2025-01-30T13:48:49.491253359Z" level=info msg="CreateContainer within sandbox \"d4413c392cdb46f670c4467110eff29fef65f5d459aada106dd77d33210e1f7e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:48:49.509675 containerd[1597]: time="2025-01-30T13:48:49.509609681Z" level=info msg="CreateContainer within sandbox \"d4413c392cdb46f670c4467110eff29fef65f5d459aada106dd77d33210e1f7e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"55871c0e501a0a92a89a2df0798403a5fb72cdaf0d1e8bc09ddff278681b79df\"" Jan 30 13:48:49.511326 containerd[1597]: time="2025-01-30T13:48:49.510622615Z" level=info msg="StartContainer for \"55871c0e501a0a92a89a2df0798403a5fb72cdaf0d1e8bc09ddff278681b79df\"" Jan 30 13:48:49.606980 containerd[1597]: time="2025-01-30T13:48:49.606928154Z" level=info msg="StartContainer for \"55871c0e501a0a92a89a2df0798403a5fb72cdaf0d1e8bc09ddff278681b79df\" returns successfully" Jan 30 13:48:49.659777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55871c0e501a0a92a89a2df0798403a5fb72cdaf0d1e8bc09ddff278681b79df-rootfs.mount: Deactivated successfully. 
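The repeated "unexpected end of JSON input" messages come from kubelet's FlexVolume probing: for each directory under the plugin path it executes the driver binary with the argument `init` and unmarshals the command's stdout as JSON, so a missing executable (empty output) produces exactly this parse error. A minimal sketch of the `init` handshake a FlexVolume driver is expected to implement, written to a temp dir for illustration (this stub is not the actual `nodeagent~uds` driver the log is probing for):

```shell
# Minimal FlexVolume driver stub. kubelet calls "<driver> init" and parses
# stdout as JSON; the absent executable at
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
# yields empty output, hence the repeated unmarshal failures above.
dir=$(mktemp -d)
cat > "$dir/uds" <<'EOF'
#!/bin/sh
case "$1" in
  init)
    # The probe call: must answer with a JSON status object.
    echo '{"status":"Success","capabilities":{"attach":false}}'
    ;;
  *)
    # Unimplemented FlexVolume calls still answer with valid JSON.
    echo '{"status":"Not supported"}'
    ;;
esac
EOF
chmod +x "$dir/uds"
"$dir/uds" init
```

kubelet re-probes the plugin directory on filesystem events, which is why the same driver-call.go/plugins.go trio recurs with only the timestamps changing.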
Jan 30 13:48:49.837172 kubelet[2800]: E0130 13:48:49.837103 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:49.937359 kubelet[2800]: I0130 13:48:49.937294 2800 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:48:49.960431 kubelet[2800]: I0130 13:48:49.955964 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-647956c58-wx9nq" podStartSLOduration=3.105583277 podStartE2EDuration="4.955930597s" podCreationTimestamp="2025-01-30 13:48:45 +0000 UTC" firstStartedPulling="2025-01-30 13:48:46.657885219 +0000 UTC m=+21.984523743" lastFinishedPulling="2025-01-30 13:48:48.50823254 +0000 UTC m=+23.834871063" observedRunningTime="2025-01-30 13:48:48.948621096 +0000 UTC m=+24.275259623" watchObservedRunningTime="2025-01-30 13:48:49.955930597 +0000 UTC m=+25.282569104" Jan 30 13:48:50.253921 containerd[1597]: time="2025-01-30T13:48:50.253738910Z" level=info msg="shim disconnected" id=55871c0e501a0a92a89a2df0798403a5fb72cdaf0d1e8bc09ddff278681b79df namespace=k8s.io Jan 30 13:48:50.254190 containerd[1597]: time="2025-01-30T13:48:50.253846614Z" level=warning msg="cleaning up after shim disconnected" id=55871c0e501a0a92a89a2df0798403a5fb72cdaf0d1e8bc09ddff278681b79df namespace=k8s.io Jan 30 13:48:50.254190 containerd[1597]: time="2025-01-30T13:48:50.254000491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:48:50.943323 containerd[1597]: time="2025-01-30T13:48:50.943278575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:48:51.837288 kubelet[2800]: E0130 13:48:51.836771 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:53.836707 kubelet[2800]: E0130 13:48:53.836637 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:54.799109 containerd[1597]: time="2025-01-30T13:48:54.799037443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:54.800628 containerd[1597]: time="2025-01-30T13:48:54.800553526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:48:54.802296 containerd[1597]: time="2025-01-30T13:48:54.802231553Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:54.805708 containerd[1597]: time="2025-01-30T13:48:54.805663664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:54.806981 containerd[1597]: time="2025-01-30T13:48:54.806628089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.863299338s" Jan 30 
13:48:54.806981 containerd[1597]: time="2025-01-30T13:48:54.806683559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:48:54.810806 containerd[1597]: time="2025-01-30T13:48:54.810763842Z" level=info msg="CreateContainer within sandbox \"d4413c392cdb46f670c4467110eff29fef65f5d459aada106dd77d33210e1f7e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:48:54.834850 containerd[1597]: time="2025-01-30T13:48:54.834784441Z" level=info msg="CreateContainer within sandbox \"d4413c392cdb46f670c4467110eff29fef65f5d459aada106dd77d33210e1f7e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8f7e75af5d3eb7b1b94fdfe4e7a4b17731e329cc95f2f9906616cdf71e8d4a4f\"" Jan 30 13:48:54.835913 containerd[1597]: time="2025-01-30T13:48:54.835800908Z" level=info msg="StartContainer for \"8f7e75af5d3eb7b1b94fdfe4e7a4b17731e329cc95f2f9906616cdf71e8d4a4f\"" Jan 30 13:48:54.952109 containerd[1597]: time="2025-01-30T13:48:54.950884194Z" level=info msg="StartContainer for \"8f7e75af5d3eb7b1b94fdfe4e7a4b17731e329cc95f2f9906616cdf71e8d4a4f\" returns successfully" Jan 30 13:48:55.837833 kubelet[2800]: E0130 13:48:55.837617 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:56.033585 containerd[1597]: time="2025-01-30T13:48:56.033462609Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:48:56.074670 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-8f7e75af5d3eb7b1b94fdfe4e7a4b17731e329cc95f2f9906616cdf71e8d4a4f-rootfs.mount: Deactivated successfully. Jan 30 13:48:56.103270 kubelet[2800]: I0130 13:48:56.102916 2800 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:48:56.135229 kubelet[2800]: I0130 13:48:56.134828 2800 topology_manager.go:215] "Topology Admit Handler" podUID="bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f9rp4" Jan 30 13:48:56.142713 kubelet[2800]: I0130 13:48:56.142567 2800 topology_manager.go:215] "Topology Admit Handler" podUID="f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kgnts" Jan 30 13:48:56.160451 kubelet[2800]: I0130 13:48:56.156509 2800 topology_manager.go:215] "Topology Admit Handler" podUID="4b2c2fe6-e614-4530-a6fa-02d31dc3b011" podNamespace="calico-apiserver" podName="calico-apiserver-7876bf9cc5-jjtnr" Jan 30 13:48:56.160451 kubelet[2800]: I0130 13:48:56.156740 2800 topology_manager.go:215] "Topology Admit Handler" podUID="954b7f72-1740-46bf-9d10-67fe412470fe" podNamespace="calico-apiserver" podName="calico-apiserver-7876bf9cc5-rvcj9" Jan 30 13:48:56.160451 kubelet[2800]: I0130 13:48:56.156900 2800 topology_manager.go:215] "Topology Admit Handler" podUID="9b01ed56-b260-4e1d-b3d3-dac544b9b63d" podNamespace="calico-system" podName="calico-kube-controllers-5cd4d8684d-s5t7r" Jan 30 13:48:56.219595 kubelet[2800]: I0130 13:48:56.219546 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/954b7f72-1740-46bf-9d10-67fe412470fe-calico-apiserver-certs\") pod \"calico-apiserver-7876bf9cc5-rvcj9\" (UID: \"954b7f72-1740-46bf-9d10-67fe412470fe\") " pod="calico-apiserver/calico-apiserver-7876bf9cc5-rvcj9" Jan 30 13:48:56.219954 kubelet[2800]: I0130 13:48:56.219927 2800 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b01ed56-b260-4e1d-b3d3-dac544b9b63d-tigera-ca-bundle\") pod \"calico-kube-controllers-5cd4d8684d-s5t7r\" (UID: \"9b01ed56-b260-4e1d-b3d3-dac544b9b63d\") " pod="calico-system/calico-kube-controllers-5cd4d8684d-s5t7r" Jan 30 13:48:56.220169 kubelet[2800]: I0130 13:48:56.220147 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0-config-volume\") pod \"coredns-7db6d8ff4d-kgnts\" (UID: \"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0\") " pod="kube-system/coredns-7db6d8ff4d-kgnts" Jan 30 13:48:56.220411 kubelet[2800]: I0130 13:48:56.220350 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjtl\" (UniqueName: \"kubernetes.io/projected/9b01ed56-b260-4e1d-b3d3-dac544b9b63d-kube-api-access-gsjtl\") pod \"calico-kube-controllers-5cd4d8684d-s5t7r\" (UID: \"9b01ed56-b260-4e1d-b3d3-dac544b9b63d\") " pod="calico-system/calico-kube-controllers-5cd4d8684d-s5t7r" Jan 30 13:48:56.220673 kubelet[2800]: I0130 13:48:56.220593 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rm6\" (UniqueName: \"kubernetes.io/projected/954b7f72-1740-46bf-9d10-67fe412470fe-kube-api-access-s8rm6\") pod \"calico-apiserver-7876bf9cc5-rvcj9\" (UID: \"954b7f72-1740-46bf-9d10-67fe412470fe\") " pod="calico-apiserver/calico-apiserver-7876bf9cc5-rvcj9" Jan 30 13:48:56.221198 kubelet[2800]: I0130 13:48:56.220968 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s66pq\" (UniqueName: \"kubernetes.io/projected/4b2c2fe6-e614-4530-a6fa-02d31dc3b011-kube-api-access-s66pq\") pod \"calico-apiserver-7876bf9cc5-jjtnr\" (UID: 
\"4b2c2fe6-e614-4530-a6fa-02d31dc3b011\") " pod="calico-apiserver/calico-apiserver-7876bf9cc5-jjtnr" Jan 30 13:48:56.221198 kubelet[2800]: I0130 13:48:56.221060 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25-config-volume\") pod \"coredns-7db6d8ff4d-f9rp4\" (UID: \"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25\") " pod="kube-system/coredns-7db6d8ff4d-f9rp4" Jan 30 13:48:56.221198 kubelet[2800]: I0130 13:48:56.221097 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b2c2fe6-e614-4530-a6fa-02d31dc3b011-calico-apiserver-certs\") pod \"calico-apiserver-7876bf9cc5-jjtnr\" (UID: \"4b2c2fe6-e614-4530-a6fa-02d31dc3b011\") " pod="calico-apiserver/calico-apiserver-7876bf9cc5-jjtnr" Jan 30 13:48:56.221198 kubelet[2800]: I0130 13:48:56.221134 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dhzp\" (UniqueName: \"kubernetes.io/projected/bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25-kube-api-access-7dhzp\") pod \"coredns-7db6d8ff4d-f9rp4\" (UID: \"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25\") " pod="kube-system/coredns-7db6d8ff4d-f9rp4" Jan 30 13:48:56.221198 kubelet[2800]: I0130 13:48:56.221163 2800 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwlrj\" (UniqueName: \"kubernetes.io/projected/f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0-kube-api-access-nwlrj\") pod \"coredns-7db6d8ff4d-kgnts\" (UID: \"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0\") " pod="kube-system/coredns-7db6d8ff4d-kgnts" Jan 30 13:48:56.464854 containerd[1597]: time="2025-01-30T13:48:56.464160857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f9rp4,Uid:bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25,Namespace:kube-system,Attempt:0,}" 
Jan 30 13:48:56.468261 containerd[1597]: time="2025-01-30T13:48:56.468216237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kgnts,Uid:f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:56.490763 containerd[1597]: time="2025-01-30T13:48:56.490465878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-jjtnr,Uid:4b2c2fe6-e614-4530-a6fa-02d31dc3b011,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:48:56.490763 containerd[1597]: time="2025-01-30T13:48:56.490537077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd4d8684d-s5t7r,Uid:9b01ed56-b260-4e1d-b3d3-dac544b9b63d,Namespace:calico-system,Attempt:0,}" Jan 30 13:48:56.511641 containerd[1597]: time="2025-01-30T13:48:56.511559501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-rvcj9,Uid:954b7f72-1740-46bf-9d10-67fe412470fe,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:48:56.836872 containerd[1597]: time="2025-01-30T13:48:56.836798663Z" level=info msg="shim disconnected" id=8f7e75af5d3eb7b1b94fdfe4e7a4b17731e329cc95f2f9906616cdf71e8d4a4f namespace=k8s.io Jan 30 13:48:56.836872 containerd[1597]: time="2025-01-30T13:48:56.836867291Z" level=warning msg="cleaning up after shim disconnected" id=8f7e75af5d3eb7b1b94fdfe4e7a4b17731e329cc95f2f9906616cdf71e8d4a4f namespace=k8s.io Jan 30 13:48:56.836872 containerd[1597]: time="2025-01-30T13:48:56.836880533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:48:56.866444 containerd[1597]: time="2025-01-30T13:48:56.864827303Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:48:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:48:56.981761 containerd[1597]: time="2025-01-30T13:48:56.980566084Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:48:57.170594 containerd[1597]: time="2025-01-30T13:48:57.169960179Z" level=error msg="Failed to destroy network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.179202 containerd[1597]: time="2025-01-30T13:48:57.179139556Z" level=error msg="encountered an error cleaning up failed sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.179304 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb-shm.mount: Deactivated successfully. 
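The containerd reload error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") persists until Calico's install-cni container finishes writing a network config list into that directory. An abridged sketch of the kind of `*.conflist` it drops, created in a temp dir for illustration (field values here are typical Calico defaults, not read from this machine):

```shell
# Sketch of a CNI network config list. Until a file like this exists in
# /etc/cni/net.d, containerd keeps reporting "cni plugin not initialized".
dir=$(mktemp -d)
cat > "$dir/10-calico.conflist" <<'EOF'
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "calico", "datastore_type": "kubernetes",
      "ipam": { "type": "calico-ipam" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
grep -q '"type": "calico"' "$dir/10-calico.conflist" && echo configured
```

The WRITE event on `/etc/cni/net.d/calico-kubeconfig` seen in the log is install-cni at work; the reload only fails because no conflist has landed yet.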
Jan 30 13:48:57.180149 containerd[1597]: time="2025-01-30T13:48:57.179660999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f9rp4,Uid:bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.183904 kubelet[2800]: E0130 13:48:57.181093 2800 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.183904 kubelet[2800]: E0130 13:48:57.181183 2800 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-f9rp4" Jan 30 13:48:57.183904 kubelet[2800]: E0130 13:48:57.181215 2800 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-f9rp4" Jan 30 
13:48:57.184642 kubelet[2800]: E0130 13:48:57.181272 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-f9rp4_kube-system(bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-f9rp4_kube-system(bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-f9rp4" podUID="bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25" Jan 30 13:48:57.210136 containerd[1597]: time="2025-01-30T13:48:57.209989159Z" level=error msg="Failed to destroy network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.213172 containerd[1597]: time="2025-01-30T13:48:57.213025916Z" level=error msg="encountered an error cleaning up failed sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.214796 containerd[1597]: time="2025-01-30T13:48:57.214669172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd4d8684d-s5t7r,Uid:9b01ed56-b260-4e1d-b3d3-dac544b9b63d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.216462 kubelet[2800]: E0130 13:48:57.216299 2800 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.217331 kubelet[2800]: E0130 13:48:57.216969 2800 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cd4d8684d-s5t7r" Jan 30 13:48:57.217762 kubelet[2800]: E0130 13:48:57.217718 2800 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cd4d8684d-s5t7r" Jan 30 13:48:57.218309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c-shm.mount: Deactivated successfully. 
Jan 30 13:48:57.219658 kubelet[2800]: E0130 13:48:57.219550 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cd4d8684d-s5t7r_calico-system(9b01ed56-b260-4e1d-b3d3-dac544b9b63d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cd4d8684d-s5t7r_calico-system(9b01ed56-b260-4e1d-b3d3-dac544b9b63d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cd4d8684d-s5t7r" podUID="9b01ed56-b260-4e1d-b3d3-dac544b9b63d" Jan 30 13:48:57.231639 containerd[1597]: time="2025-01-30T13:48:57.231579171Z" level=error msg="Failed to destroy network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.233843 containerd[1597]: time="2025-01-30T13:48:57.233753986Z" level=error msg="encountered an error cleaning up failed sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.236419 containerd[1597]: time="2025-01-30T13:48:57.234518179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-jjtnr,Uid:4b2c2fe6-e614-4530-a6fa-02d31dc3b011,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.237510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe-shm.mount: Deactivated successfully. Jan 30 13:48:57.240469 kubelet[2800]: E0130 13:48:57.238602 2800 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.240469 kubelet[2800]: E0130 13:48:57.238672 2800 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7876bf9cc5-jjtnr" Jan 30 13:48:57.240469 kubelet[2800]: E0130 13:48:57.238705 2800 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7876bf9cc5-jjtnr" Jan 30 13:48:57.240690 containerd[1597]: time="2025-01-30T13:48:57.240316432Z" level=error 
msg="Failed to destroy network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.240758 kubelet[2800]: E0130 13:48:57.238764 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7876bf9cc5-jjtnr_calico-apiserver(4b2c2fe6-e614-4530-a6fa-02d31dc3b011)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7876bf9cc5-jjtnr_calico-apiserver(4b2c2fe6-e614-4530-a6fa-02d31dc3b011)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7876bf9cc5-jjtnr" podUID="4b2c2fe6-e614-4530-a6fa-02d31dc3b011" Jan 30 13:48:57.240867 containerd[1597]: time="2025-01-30T13:48:57.240753649Z" level=error msg="encountered an error cleaning up failed sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.240867 containerd[1597]: time="2025-01-30T13:48:57.240818370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kgnts,Uid:f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.243820 kubelet[2800]: E0130 13:48:57.242455 2800 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.243820 kubelet[2800]: E0130 13:48:57.242567 2800 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kgnts" Jan 30 13:48:57.243820 kubelet[2800]: E0130 13:48:57.242629 2800 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kgnts" Jan 30 13:48:57.248820 kubelet[2800]: E0130 13:48:57.247600 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kgnts_kube-system(f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kgnts_kube-system(f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kgnts" podUID="f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0" Jan 30 13:48:57.250313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf-shm.mount: Deactivated successfully. Jan 30 13:48:57.251649 containerd[1597]: time="2025-01-30T13:48:57.251599144Z" level=error msg="Failed to destroy network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.252065 containerd[1597]: time="2025-01-30T13:48:57.252020468Z" level=error msg="encountered an error cleaning up failed sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.252242 containerd[1597]: time="2025-01-30T13:48:57.252205886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-rvcj9,Uid:954b7f72-1740-46bf-9d10-67fe412470fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.252954 kubelet[2800]: E0130 
13:48:57.252753 2800 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.253212 kubelet[2800]: E0130 13:48:57.253111 2800 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7876bf9cc5-rvcj9" Jan 30 13:48:57.253212 kubelet[2800]: E0130 13:48:57.253173 2800 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7876bf9cc5-rvcj9" Jan 30 13:48:57.253435 kubelet[2800]: E0130 13:48:57.253232 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7876bf9cc5-rvcj9_calico-apiserver(954b7f72-1740-46bf-9d10-67fe412470fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7876bf9cc5-rvcj9_calico-apiserver(954b7f72-1740-46bf-9d10-67fe412470fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7876bf9cc5-rvcj9" podUID="954b7f72-1740-46bf-9d10-67fe412470fe" Jan 30 13:48:57.841214 containerd[1597]: time="2025-01-30T13:48:57.840686026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrv56,Uid:8ce3c383-738d-490f-a267-4c123b509bcf,Namespace:calico-system,Attempt:0,}" Jan 30 13:48:57.958440 containerd[1597]: time="2025-01-30T13:48:57.958364168Z" level=error msg="Failed to destroy network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.959047 containerd[1597]: time="2025-01-30T13:48:57.959005513Z" level=error msg="encountered an error cleaning up failed sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.959250 containerd[1597]: time="2025-01-30T13:48:57.959217560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrv56,Uid:8ce3c383-738d-490f-a267-4c123b509bcf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.959832 kubelet[2800]: E0130 13:48:57.959776 2800 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:57.960032 kubelet[2800]: E0130 13:48:57.959909 2800 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jrv56" Jan 30 13:48:57.960032 kubelet[2800]: E0130 13:48:57.959969 2800 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jrv56" Jan 30 13:48:57.960219 kubelet[2800]: E0130 13:48:57.960066 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jrv56_calico-system(8ce3c383-738d-490f-a267-4c123b509bcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jrv56_calico-system(8ce3c383-738d-490f-a267-4c123b509bcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:57.980461 kubelet[2800]: I0130 13:48:57.979710 2800 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:48:57.982562 containerd[1597]: time="2025-01-30T13:48:57.981627033Z" level=info msg="StopPodSandbox for \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\"" Jan 30 13:48:57.982562 containerd[1597]: time="2025-01-30T13:48:57.981880993Z" level=info msg="Ensure that sandbox 3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c in task-service has been cleanup successfully" Jan 30 13:48:57.997956 kubelet[2800]: I0130 13:48:57.997912 2800 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:48:58.005790 containerd[1597]: time="2025-01-30T13:48:58.005743494Z" level=info msg="StopPodSandbox for \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\"" Jan 30 13:48:58.006029 containerd[1597]: time="2025-01-30T13:48:58.005999663Z" level=info msg="Ensure that sandbox 3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf in task-service has been cleanup successfully" Jan 30 13:48:58.013105 kubelet[2800]: I0130 13:48:58.013013 2800 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:48:58.017423 containerd[1597]: time="2025-01-30T13:48:58.016940500Z" level=info msg="StopPodSandbox for \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\"" Jan 30 13:48:58.018988 containerd[1597]: time="2025-01-30T13:48:58.018943208Z" level=info msg="Ensure that sandbox 25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875 in task-service has been cleanup successfully" Jan 30 
13:48:58.035302 kubelet[2800]: I0130 13:48:58.034866 2800 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:48:58.038766 containerd[1597]: time="2025-01-30T13:48:58.038612737Z" level=info msg="StopPodSandbox for \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\"" Jan 30 13:48:58.043617 containerd[1597]: time="2025-01-30T13:48:58.043557999Z" level=info msg="Ensure that sandbox 4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e in task-service has been cleanup successfully" Jan 30 13:48:58.053276 kubelet[2800]: I0130 13:48:58.052517 2800 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:48:58.056220 containerd[1597]: time="2025-01-30T13:48:58.055271305Z" level=info msg="StopPodSandbox for \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\"" Jan 30 13:48:58.059553 containerd[1597]: time="2025-01-30T13:48:58.058917808Z" level=info msg="Ensure that sandbox ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe in task-service has been cleanup successfully" Jan 30 13:48:58.069424 kubelet[2800]: I0130 13:48:58.066935 2800 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:48:58.080844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e-shm.mount: Deactivated successfully. 
Jan 30 13:48:58.085189 containerd[1597]: time="2025-01-30T13:48:58.082629295Z" level=info msg="StopPodSandbox for \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\"" Jan 30 13:48:58.085189 containerd[1597]: time="2025-01-30T13:48:58.082902436Z" level=info msg="Ensure that sandbox 7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb in task-service has been cleanup successfully" Jan 30 13:48:58.169688 containerd[1597]: time="2025-01-30T13:48:58.169504385Z" level=error msg="StopPodSandbox for \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\" failed" error="failed to destroy network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:58.170171 kubelet[2800]: E0130 13:48:58.170125 2800 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:48:58.170446 kubelet[2800]: E0130 13:48:58.170367 2800 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c"} Jan 30 13:48:58.170735 kubelet[2800]: E0130 13:48:58.170605 2800 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b01ed56-b260-4e1d-b3d3-dac544b9b63d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:48:58.170735 kubelet[2800]: E0130 13:48:58.170674 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b01ed56-b260-4e1d-b3d3-dac544b9b63d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cd4d8684d-s5t7r" podUID="9b01ed56-b260-4e1d-b3d3-dac544b9b63d" Jan 30 13:48:58.177490 containerd[1597]: time="2025-01-30T13:48:58.177430624Z" level=error msg="StopPodSandbox for \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\" failed" error="failed to destroy network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:58.178903 kubelet[2800]: E0130 13:48:58.178380 2800 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:48:58.178903 kubelet[2800]: E0130 
13:48:58.178469 2800 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf"} Jan 30 13:48:58.178903 kubelet[2800]: E0130 13:48:58.178531 2800 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:48:58.178903 kubelet[2800]: E0130 13:48:58.178571 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kgnts" podUID="f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0" Jan 30 13:48:58.211606 containerd[1597]: time="2025-01-30T13:48:58.211531664Z" level=error msg="StopPodSandbox for \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\" failed" error="failed to destroy network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:58.212355 kubelet[2800]: E0130 13:48:58.212089 2800 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:48:58.212355 kubelet[2800]: E0130 13:48:58.212158 2800 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875"} Jan 30 13:48:58.212355 kubelet[2800]: E0130 13:48:58.212209 2800 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ce3c383-738d-490f-a267-4c123b509bcf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:48:58.212355 kubelet[2800]: E0130 13:48:58.212288 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ce3c383-738d-490f-a267-4c123b509bcf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jrv56" podUID="8ce3c383-738d-490f-a267-4c123b509bcf" Jan 30 13:48:58.241450 containerd[1597]: time="2025-01-30T13:48:58.240173301Z" level=error msg="StopPodSandbox for 
\"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\" failed" error="failed to destroy network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:58.241626 kubelet[2800]: E0130 13:48:58.240502 2800 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:48:58.241626 kubelet[2800]: E0130 13:48:58.240564 2800 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e"} Jan 30 13:48:58.241626 kubelet[2800]: E0130 13:48:58.240623 2800 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"954b7f72-1740-46bf-9d10-67fe412470fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:48:58.241626 kubelet[2800]: E0130 13:48:58.240662 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"954b7f72-1740-46bf-9d10-67fe412470fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7876bf9cc5-rvcj9" podUID="954b7f72-1740-46bf-9d10-67fe412470fe" Jan 30 13:48:58.244473 containerd[1597]: time="2025-01-30T13:48:58.242619049Z" level=error msg="StopPodSandbox for \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\" failed" error="failed to destroy network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:58.244608 kubelet[2800]: E0130 13:48:58.244199 2800 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:48:58.244608 kubelet[2800]: E0130 13:48:58.244262 2800 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe"} Jan 30 13:48:58.244608 kubelet[2800]: E0130 13:48:58.244309 2800 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b2c2fe6-e614-4530-a6fa-02d31dc3b011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:48:58.244608 kubelet[2800]: E0130 13:48:58.244346 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b2c2fe6-e614-4530-a6fa-02d31dc3b011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7876bf9cc5-jjtnr" podUID="4b2c2fe6-e614-4530-a6fa-02d31dc3b011" Jan 30 13:48:58.268358 containerd[1597]: time="2025-01-30T13:48:58.268290185Z" level=error msg="StopPodSandbox for \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\" failed" error="failed to destroy network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:48:58.268880 kubelet[2800]: E0130 13:48:58.268832 2800 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:48:58.269157 kubelet[2800]: E0130 13:48:58.269031 2800 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb"} Jan 30 13:48:58.269157 kubelet[2800]: E0130 13:48:58.269085 2800 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:48:58.269157 kubelet[2800]: E0130 13:48:58.269122 2800 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-f9rp4" podUID="bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25" Jan 30 13:49:03.295827 kubelet[2800]: I0130 13:49:03.295103 2800 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:03.836728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2921005897.mount: Deactivated successfully. 
Jan 30 13:49:03.884308 containerd[1597]: time="2025-01-30T13:49:03.884243639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:03.885805 containerd[1597]: time="2025-01-30T13:49:03.885620728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:49:03.889133 containerd[1597]: time="2025-01-30T13:49:03.887186940Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:03.890874 containerd[1597]: time="2025-01-30T13:49:03.890490726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:03.892501 containerd[1597]: time="2025-01-30T13:49:03.892440221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.911814011s" Jan 30 13:49:03.892501 containerd[1597]: time="2025-01-30T13:49:03.892493397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:49:03.927465 containerd[1597]: time="2025-01-30T13:49:03.927279628Z" level=info msg="CreateContainer within sandbox \"d4413c392cdb46f670c4467110eff29fef65f5d459aada106dd77d33210e1f7e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:49:03.953229 containerd[1597]: time="2025-01-30T13:49:03.953159664Z" level=info 
msg="CreateContainer within sandbox \"d4413c392cdb46f670c4467110eff29fef65f5d459aada106dd77d33210e1f7e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"38963f1f569e8970b47ef5c0d18f9593098dba6b0f9374033b4a22e30fdd74b6\"" Jan 30 13:49:03.955426 containerd[1597]: time="2025-01-30T13:49:03.954033994Z" level=info msg="StartContainer for \"38963f1f569e8970b47ef5c0d18f9593098dba6b0f9374033b4a22e30fdd74b6\"" Jan 30 13:49:04.041884 containerd[1597]: time="2025-01-30T13:49:04.041826627Z" level=info msg="StartContainer for \"38963f1f569e8970b47ef5c0d18f9593098dba6b0f9374033b4a22e30fdd74b6\" returns successfully" Jan 30 13:49:04.110288 kubelet[2800]: I0130 13:49:04.110001 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-68r5d" podStartSLOduration=1.031203334 podStartE2EDuration="18.109953693s" podCreationTimestamp="2025-01-30 13:48:46 +0000 UTC" firstStartedPulling="2025-01-30 13:48:46.815730108 +0000 UTC m=+22.142368621" lastFinishedPulling="2025-01-30 13:49:03.894480466 +0000 UTC m=+39.221118980" observedRunningTime="2025-01-30 13:49:04.106493875 +0000 UTC m=+39.433132403" watchObservedRunningTime="2025-01-30 13:49:04.109953693 +0000 UTC m=+39.436592220" Jan 30 13:49:04.162617 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:49:04.163030 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 13:49:05.948461 kernel: bpftool[4083]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:49:06.230308 systemd-networkd[1217]: vxlan.calico: Link UP Jan 30 13:49:06.230321 systemd-networkd[1217]: vxlan.calico: Gained carrier Jan 30 13:49:07.713698 systemd-networkd[1217]: vxlan.calico: Gained IPv6LL Jan 30 13:49:08.839571 containerd[1597]: time="2025-01-30T13:49:08.839515751Z" level=info msg="StopPodSandbox for \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\"" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.904 [INFO][4167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.905 [INFO][4167] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" iface="eth0" netns="/var/run/netns/cni-1ecbaaea-8f61-51bb-20d9-61e7ad5ea4b7" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.906 [INFO][4167] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" iface="eth0" netns="/var/run/netns/cni-1ecbaaea-8f61-51bb-20d9-61e7ad5ea4b7" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.906 [INFO][4167] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" iface="eth0" netns="/var/run/netns/cni-1ecbaaea-8f61-51bb-20d9-61e7ad5ea4b7" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.907 [INFO][4167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.907 [INFO][4167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.935 [INFO][4173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.936 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.936 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.942 [WARNING][4173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.942 [INFO][4173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.944 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:08.948882 containerd[1597]: 2025-01-30 13:49:08.947 [INFO][4167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:08.951569 containerd[1597]: time="2025-01-30T13:49:08.951500677Z" level=info msg="TearDown network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\" successfully" Jan 30 13:49:08.951569 containerd[1597]: time="2025-01-30T13:49:08.951559185Z" level=info msg="StopPodSandbox for \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\" returns successfully" Jan 30 13:49:08.952610 containerd[1597]: time="2025-01-30T13:49:08.952572256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f9rp4,Uid:bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25,Namespace:kube-system,Attempt:1,}" Jan 30 13:49:08.956963 systemd[1]: run-netns-cni\x2d1ecbaaea\x2d8f61\x2d51bb\x2d20d9\x2d61e7ad5ea4b7.mount: Deactivated successfully. 
Jan 30 13:49:09.118041 systemd-networkd[1217]: cali80b13e38ee5: Link UP Jan 30 13:49:09.118387 systemd-networkd[1217]: cali80b13e38ee5: Gained carrier Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.029 [INFO][4180] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0 coredns-7db6d8ff4d- kube-system bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25 744 0 2025-01-30 13:48:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal coredns-7db6d8ff4d-f9rp4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80b13e38ee5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.030 [INFO][4180] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.069 [INFO][4190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" HandleID="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" 
Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.079 [INFO][4190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" HandleID="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-f9rp4", "timestamp":"2025-01-30 13:49:09.068994253 +0000 UTC"}, Hostname:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.079 [INFO][4190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.079 [INFO][4190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.079 [INFO][4190] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal' Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.081 [INFO][4190] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.085 [INFO][4190] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.090 [INFO][4190] ipam/ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.092 [INFO][4190] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.094 [INFO][4190] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.094 [INFO][4190] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.096 [INFO][4190] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274 Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.102 [INFO][4190] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.2.64/26 handle="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.109 [INFO][4190] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.65/26] block=192.168.2.64/26 handle="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.109 [INFO][4190] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.65/26] handle="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.109 [INFO][4190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:09.149730 containerd[1597]: 2025-01-30 13:49:09.109 [INFO][4190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.65/26] IPv6=[] ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" HandleID="k8s-pod-network.7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:09.150908 containerd[1597]: 2025-01-30 13:49:09.111 [INFO][4180] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-f9rp4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b13e38ee5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:09.150908 containerd[1597]: 2025-01-30 13:49:09.112 [INFO][4180] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.65/32] ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" 
WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:09.150908 containerd[1597]: 2025-01-30 13:49:09.112 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80b13e38ee5 ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:09.150908 containerd[1597]: 2025-01-30 13:49:09.116 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:09.150908 containerd[1597]: 2025-01-30 13:49:09.117 [INFO][4180] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274", Pod:"coredns-7db6d8ff4d-f9rp4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b13e38ee5", MAC:"ca:b2:6f:8d:a7:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:09.150908 containerd[1597]: 2025-01-30 13:49:09.138 [INFO][4180] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f9rp4" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:09.196326 containerd[1597]: time="2025-01-30T13:49:09.196002098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:09.196668 containerd[1597]: time="2025-01-30T13:49:09.196296773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:09.196668 containerd[1597]: time="2025-01-30T13:49:09.196374373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:09.196912 containerd[1597]: time="2025-01-30T13:49:09.196682776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:09.279138 containerd[1597]: time="2025-01-30T13:49:09.279083462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f9rp4,Uid:bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25,Namespace:kube-system,Attempt:1,} returns sandbox id \"7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274\"" Jan 30 13:49:09.283689 containerd[1597]: time="2025-01-30T13:49:09.283453179Z" level=info msg="CreateContainer within sandbox \"7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:49:09.300969 containerd[1597]: time="2025-01-30T13:49:09.300920901Z" level=info msg="CreateContainer within sandbox \"7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d0e4e8148a91896a6cc4b1890117e3991b352944289ce589e3ddef2b77bc05d\"" Jan 30 13:49:09.302122 containerd[1597]: time="2025-01-30T13:49:09.301764730Z" level=info msg="StartContainer for \"1d0e4e8148a91896a6cc4b1890117e3991b352944289ce589e3ddef2b77bc05d\"" Jan 30 13:49:09.372177 containerd[1597]: time="2025-01-30T13:49:09.371985826Z" level=info msg="StartContainer for \"1d0e4e8148a91896a6cc4b1890117e3991b352944289ce589e3ddef2b77bc05d\" returns successfully" Jan 30 
13:49:09.838452 containerd[1597]: time="2025-01-30T13:49:09.837981789Z" level=info msg="StopPodSandbox for \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\"" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.900 [INFO][4297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.900 [INFO][4297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" iface="eth0" netns="/var/run/netns/cni-63d16c11-68a1-32ff-154c-d9d064c12a1a" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.901 [INFO][4297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" iface="eth0" netns="/var/run/netns/cni-63d16c11-68a1-32ff-154c-d9d064c12a1a" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.901 [INFO][4297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" iface="eth0" netns="/var/run/netns/cni-63d16c11-68a1-32ff-154c-d9d064c12a1a" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.901 [INFO][4297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.901 [INFO][4297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.936 [INFO][4303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.936 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.936 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.944 [WARNING][4303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.944 [INFO][4303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.946 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:09.949600 containerd[1597]: 2025-01-30 13:49:09.947 [INFO][4297] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:09.949600 containerd[1597]: time="2025-01-30T13:49:09.949288171Z" level=info msg="TearDown network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\" successfully" Jan 30 13:49:09.949600 containerd[1597]: time="2025-01-30T13:49:09.949325517Z" level=info msg="StopPodSandbox for \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\" returns successfully" Jan 30 13:49:09.953750 containerd[1597]: time="2025-01-30T13:49:09.953700356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrv56,Uid:8ce3c383-738d-490f-a267-4c123b509bcf,Namespace:calico-system,Attempt:1,}" Jan 30 13:49:09.961180 systemd[1]: run-netns-cni\x2d63d16c11\x2d68a1\x2d32ff\x2d154c\x2dd9d064c12a1a.mount: Deactivated successfully. 
Jan 30 13:49:10.152617 systemd-networkd[1217]: cali75802108f1e: Link UP Jan 30 13:49:10.152973 systemd-networkd[1217]: cali75802108f1e: Gained carrier Jan 30 13:49:10.163923 kubelet[2800]: I0130 13:49:10.161512 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f9rp4" podStartSLOduration=31.161484655 podStartE2EDuration="31.161484655s" podCreationTimestamp="2025-01-30 13:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:10.126173962 +0000 UTC m=+45.452812499" watchObservedRunningTime="2025-01-30 13:49:10.161484655 +0000 UTC m=+45.488123186" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.042 [INFO][4309] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0 csi-node-driver- calico-system 8ce3c383-738d-490f-a267-4c123b509bcf 754 0 2025-01-30 13:48:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal csi-node-driver-jrv56 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali75802108f1e [] []}} ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.042 [INFO][4309] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.077 [INFO][4322] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" HandleID="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.090 [INFO][4322] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" HandleID="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", "pod":"csi-node-driver-jrv56", "timestamp":"2025-01-30 13:49:10.077697938 +0000 UTC"}, Hostname:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.090 [INFO][4322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.090 [INFO][4322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.090 [INFO][4322] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal' Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.092 [INFO][4322] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.097 [INFO][4322] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.103 [INFO][4322] ipam/ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.106 [INFO][4322] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.111 [INFO][4322] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.111 [INFO][4322] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.113 [INFO][4322] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4 Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.119 [INFO][4322] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.2.64/26 handle="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.130 [INFO][4322] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.66/26] block=192.168.2.64/26 handle="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.130 [INFO][4322] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.66/26] handle="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.130 [INFO][4322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:10.189199 containerd[1597]: 2025-01-30 13:49:10.130 [INFO][4322] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.66/26] IPv6=[] ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" HandleID="k8s-pod-network.74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:10.192838 containerd[1597]: 2025-01-30 13:49:10.135 [INFO][4309] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce3c383-738d-490f-a267-4c123b509bcf", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-jrv56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75802108f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:10.192838 containerd[1597]: 2025-01-30 13:49:10.136 [INFO][4309] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.66/32] ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:10.192838 containerd[1597]: 2025-01-30 13:49:10.136 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75802108f1e ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" 
WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:10.192838 containerd[1597]: 2025-01-30 13:49:10.150 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:10.192838 containerd[1597]: 2025-01-30 13:49:10.150 [INFO][4309] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce3c383-738d-490f-a267-4c123b509bcf", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4", Pod:"csi-node-driver-jrv56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75802108f1e", MAC:"de:55:5b:98:00:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:10.192838 containerd[1597]: 2025-01-30 13:49:10.177 [INFO][4309] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4" Namespace="calico-system" Pod="csi-node-driver-jrv56" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:10.239823 containerd[1597]: time="2025-01-30T13:49:10.238087123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:10.239823 containerd[1597]: time="2025-01-30T13:49:10.238171174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:10.239823 containerd[1597]: time="2025-01-30T13:49:10.238199381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:10.239823 containerd[1597]: time="2025-01-30T13:49:10.238352456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:10.309426 containerd[1597]: time="2025-01-30T13:49:10.309362037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jrv56,Uid:8ce3c383-738d-490f-a267-4c123b509bcf,Namespace:calico-system,Attempt:1,} returns sandbox id \"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4\"" Jan 30 13:49:10.311824 containerd[1597]: time="2025-01-30T13:49:10.311766512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:49:10.658333 systemd-networkd[1217]: cali80b13e38ee5: Gained IPv6LL Jan 30 13:49:10.839950 containerd[1597]: time="2025-01-30T13:49:10.838788645Z" level=info msg="StopPodSandbox for \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\"" Jan 30 13:49:10.843927 containerd[1597]: time="2025-01-30T13:49:10.843618726Z" level=info msg="StopPodSandbox for \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\"" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.943 [INFO][4417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.944 [INFO][4417] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" iface="eth0" netns="/var/run/netns/cni-30d875c4-c992-0fde-f93d-f2e7237c4fae" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.944 [INFO][4417] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" iface="eth0" netns="/var/run/netns/cni-30d875c4-c992-0fde-f93d-f2e7237c4fae" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.944 [INFO][4417] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" iface="eth0" netns="/var/run/netns/cni-30d875c4-c992-0fde-f93d-f2e7237c4fae" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.944 [INFO][4417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.944 [INFO][4417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.995 [INFO][4428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.995 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:10.995 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:11.006 [WARNING][4428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:11.006 [INFO][4428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:11.009 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:11.014038 containerd[1597]: 2025-01-30 13:49:11.010 [INFO][4417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:11.021729 containerd[1597]: time="2025-01-30T13:49:11.018734700Z" level=info msg="TearDown network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\" successfully" Jan 30 13:49:11.021729 containerd[1597]: time="2025-01-30T13:49:11.018894630Z" level=info msg="StopPodSandbox for \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\" returns successfully" Jan 30 13:49:11.021729 containerd[1597]: time="2025-01-30T13:49:11.021088130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-jjtnr,Uid:4b2c2fe6-e614-4530-a6fa-02d31dc3b011,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:49:11.023989 systemd[1]: run-netns-cni\x2d30d875c4\x2dc992\x2d0fde\x2df93d\x2df2e7237c4fae.mount: Deactivated successfully. 
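The `run-netns-cni\x2d….mount: Deactivated successfully.` entries above are systemd mount units named after the netns bind-mount path, using systemd's unit-name escaping: the leading `/` is dropped, remaining `/` become `-`, and characters outside `[A-Za-z0-9:_.]` (including literal `-`) become C-style `\xNN` escapes. An approximate sketch of that path escaping, assuming ASCII input:

```python
def systemd_escape_path(path):
    """Approximate systemd path-to-unit-name escaping.

    Strips the leading '/', maps '/' to '-', and hex-escapes every other
    byte outside [A-Za-z0-9:_.] as \\xNN (so literal '-' becomes \\x2d).
    Simplified sketch; see systemd.unit(5) for the full rules.
    """
    allowed = set(
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_."
    )
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in allowed:
            out.append(ch)
        else:
            out.append("".join("\\x%02x" % b for b in ch.encode()))
    return "".join(out)

# /run/netns/cni-30d875c4-... -> run-netns-cni\x2d30d875c4\x2d....mount
unit = systemd_escape_path(
    "/run/netns/cni-30d875c4-c992-0fde-f93d-f2e7237c4fae") + ".mount"
print(unit)
```

This is why the unit names in the journal look mangled: the escaping must stay reversible, so `-` inside the original path cannot be left bare once `/` has been mapped to `-`.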
Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:10.942 [INFO][4416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:10.944 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" iface="eth0" netns="/var/run/netns/cni-47c055b7-ae51-9e57-2c94-cdb72d080f63" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:10.944 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" iface="eth0" netns="/var/run/netns/cni-47c055b7-ae51-9e57-2c94-cdb72d080f63" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:10.945 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" iface="eth0" netns="/var/run/netns/cni-47c055b7-ae51-9e57-2c94-cdb72d080f63" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:10.945 [INFO][4416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:10.945 [INFO][4416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:11.001 [INFO][4429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.042783 
containerd[1597]: 2025-01-30 13:49:11.001 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:11.009 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:11.031 [WARNING][4429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:11.031 [INFO][4429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:11.034 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:11.042783 containerd[1597]: 2025-01-30 13:49:11.039 [INFO][4416] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:11.042783 containerd[1597]: time="2025-01-30T13:49:11.042619572Z" level=info msg="TearDown network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\" successfully" Jan 30 13:49:11.042783 containerd[1597]: time="2025-01-30T13:49:11.042656599Z" level=info msg="StopPodSandbox for \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\" returns successfully" Jan 30 13:49:11.047483 containerd[1597]: time="2025-01-30T13:49:11.047248377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd4d8684d-s5t7r,Uid:9b01ed56-b260-4e1d-b3d3-dac544b9b63d,Namespace:calico-system,Attempt:1,}" Jan 30 13:49:11.052368 systemd[1]: run-netns-cni\x2d47c055b7\x2dae51\x2d9e57\x2d2c94\x2dcdb72d080f63.mount: Deactivated successfully. Jan 30 13:49:11.333930 systemd-networkd[1217]: cali5790e5492df: Link UP Jan 30 13:49:11.339562 systemd-networkd[1217]: cali5790e5492df: Gained carrier Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.187 [INFO][4451] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0 calico-kube-controllers-5cd4d8684d- calico-system 9b01ed56-b260-4e1d-b3d3-dac544b9b63d 771 0 2025-01-30 13:48:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cd4d8684d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal calico-kube-controllers-5cd4d8684d-s5t7r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5790e5492df [] []}} 
ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.187 [INFO][4451] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.258 [INFO][4468] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" HandleID="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.276 [INFO][4468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" HandleID="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5cd4d8684d-s5t7r", "timestamp":"2025-01-30 13:49:11.257957437 +0000 UTC"}, Hostname:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.276 [INFO][4468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.276 [INFO][4468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.277 [INFO][4468] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal' Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.280 [INFO][4468] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.286 [INFO][4468] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.292 [INFO][4468] ipam/ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.294 [INFO][4468] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.297 [INFO][4468] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.297 [INFO][4468] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 
handle="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.300 [INFO][4468] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573 Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.308 [INFO][4468] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.2.64/26 handle="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.320 [INFO][4468] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.67/26] block=192.168.2.64/26 handle="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.321 [INFO][4468] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.67/26] handle="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.321 [INFO][4468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
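The two CmdAdd sequences in this window claim consecutive addresses from the node's affinity block: 192.168.2.66/26 for csi-node-driver-jrv56, then 192.168.2.67/26 for calico-kube-controllers-5cd4d8684d-s5t7r. The behavior logged by ipam.go (load the affine block, scan for a free address, write the block back to claim it) can be sketched with the standard `ipaddress` module; the claimed-set contents below are an assumption for illustration (the log only shows .66 and .67 being assigned), and this is not Calico's actual code.

```python
import ipaddress

def auto_assign(block, claimed):
    """Claim the first free address in an affinity block.

    Sketch of the 'Attempting to assign 1 addresses from block' step;
    iterating the network yields every address in the /26, since Calico
    blocks are allocation pools, not subnets with reserved endpoints.
    """
    net = ipaddress.ip_network(block)
    for ip in net:
        if str(ip) not in claimed:
            claimed.add(str(ip))  # "Writing block in order to claim IPs"
            return "%s/%d" % (ip, net.prefixlen)
    raise RuntimeError("block %s is full" % block)

# Assumed prior claims: .64/.65 to earlier pods, .66 to csi-node-driver-jrv56.
claimed = {"192.168.2.64", "192.168.2.65", "192.168.2.66"}
print(auto_assign("192.168.2.64/26", claimed))  # 192.168.2.67/26
```

Note the prefix-length switch visible in the log: IPAM reports the claim relative to the block (192.168.2.67/26), while the endpoint spec records the workload address as a /32 host route (IPNetworks ["192.168.2.67/32"]).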
Jan 30 13:49:11.371867 containerd[1597]: 2025-01-30 13:49:11.321 [INFO][4468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.67/26] IPv6=[] ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" HandleID="k8s-pod-network.c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.376138 containerd[1597]: 2025-01-30 13:49:11.325 [INFO][4451] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0", GenerateName:"calico-kube-controllers-5cd4d8684d-", Namespace:"calico-system", SelfLink:"", UID:"9b01ed56-b260-4e1d-b3d3-dac544b9b63d", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd4d8684d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5cd4d8684d-s5t7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5790e5492df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:11.376138 containerd[1597]: 2025-01-30 13:49:11.325 [INFO][4451] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.67/32] ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.376138 containerd[1597]: 2025-01-30 13:49:11.325 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5790e5492df ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.376138 containerd[1597]: 2025-01-30 13:49:11.341 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.376138 containerd[1597]: 2025-01-30 13:49:11.345 [INFO][4451] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0", GenerateName:"calico-kube-controllers-5cd4d8684d-", Namespace:"calico-system", SelfLink:"", UID:"9b01ed56-b260-4e1d-b3d3-dac544b9b63d", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd4d8684d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573", Pod:"calico-kube-controllers-5cd4d8684d-s5t7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5790e5492df", MAC:"f2:1a:a2:02:0b:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:11.376138 containerd[1597]: 
2025-01-30 13:49:11.368 [INFO][4451] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573" Namespace="calico-system" Pod="calico-kube-controllers-5cd4d8684d-s5t7r" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:11.433144 systemd-networkd[1217]: cali6444274696c: Link UP Jan 30 13:49:11.433855 systemd-networkd[1217]: cali6444274696c: Gained carrier Jan 30 13:49:11.457953 containerd[1597]: time="2025-01-30T13:49:11.457782187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:11.457953 containerd[1597]: time="2025-01-30T13:49:11.457864421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:11.457953 containerd[1597]: time="2025-01-30T13:49:11.457891798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:11.459918 containerd[1597]: time="2025-01-30T13:49:11.459540192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.169 [INFO][4442] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0 calico-apiserver-7876bf9cc5- calico-apiserver 4b2c2fe6-e614-4530-a6fa-02d31dc3b011 770 0 2025-01-30 13:48:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7876bf9cc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal calico-apiserver-7876bf9cc5-jjtnr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6444274696c [] []}} ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.169 [INFO][4442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.252 [INFO][4464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" HandleID="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" 
Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.277 [INFO][4464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" HandleID="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", "pod":"calico-apiserver-7876bf9cc5-jjtnr", "timestamp":"2025-01-30 13:49:11.252019622 +0000 UTC"}, Hostname:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.277 [INFO][4464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.321 [INFO][4464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.321 [INFO][4464] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal' Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.326 [INFO][4464] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.341 [INFO][4464] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.370 [INFO][4464] ipam/ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.376 [INFO][4464] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.383 [INFO][4464] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.383 [INFO][4464] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.385 [INFO][4464] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176 Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.395 [INFO][4464] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.2.64/26 handle="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.411 [INFO][4464] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.68/26] block=192.168.2.64/26 handle="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.411 [INFO][4464] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.68/26] handle="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.411 [INFO][4464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:11.485173 containerd[1597]: 2025-01-30 13:49:11.411 [INFO][4464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.68/26] IPv6=[] ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" HandleID="k8s-pod-network.e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.486761 containerd[1597]: 2025-01-30 13:49:11.416 [INFO][4442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b2c2fe6-e614-4530-a6fa-02d31dc3b011", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7876bf9cc5-jjtnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6444274696c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:11.486761 containerd[1597]: 2025-01-30 13:49:11.416 [INFO][4442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.68/32] ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.486761 containerd[1597]: 2025-01-30 13:49:11.417 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali6444274696c ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.486761 containerd[1597]: 2025-01-30 13:49:11.443 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.486761 containerd[1597]: 2025-01-30 13:49:11.456 [INFO][4442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b2c2fe6-e614-4530-a6fa-02d31dc3b011", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176", Pod:"calico-apiserver-7876bf9cc5-jjtnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6444274696c", MAC:"86:09:47:0d:57:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:11.486761 containerd[1597]: 2025-01-30 13:49:11.477 [INFO][4442] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-jjtnr" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:11.572885 containerd[1597]: time="2025-01-30T13:49:11.572781361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:11.574185 containerd[1597]: time="2025-01-30T13:49:11.573922115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:11.574612 containerd[1597]: time="2025-01-30T13:49:11.574161895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:11.575254 containerd[1597]: time="2025-01-30T13:49:11.575142703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:11.681817 systemd-networkd[1217]: cali75802108f1e: Gained IPv6LL Jan 30 13:49:11.697869 containerd[1597]: time="2025-01-30T13:49:11.697587112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cd4d8684d-s5t7r,Uid:9b01ed56-b260-4e1d-b3d3-dac544b9b63d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573\"" Jan 30 13:49:11.744252 containerd[1597]: time="2025-01-30T13:49:11.743830409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-jjtnr,Uid:4b2c2fe6-e614-4530-a6fa-02d31dc3b011,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176\"" Jan 30 13:49:11.782602 containerd[1597]: time="2025-01-30T13:49:11.782531257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:11.783890 containerd[1597]: time="2025-01-30T13:49:11.783800478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:49:11.785455 containerd[1597]: time="2025-01-30T13:49:11.785345691Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:11.789138 containerd[1597]: time="2025-01-30T13:49:11.789049817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:11.790106 
containerd[1597]: time="2025-01-30T13:49:11.789926875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.478119099s" Jan 30 13:49:11.790106 containerd[1597]: time="2025-01-30T13:49:11.789973573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:49:11.792594 containerd[1597]: time="2025-01-30T13:49:11.792313181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:49:11.794115 containerd[1597]: time="2025-01-30T13:49:11.794074225Z" level=info msg="CreateContainer within sandbox \"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:49:11.813103 containerd[1597]: time="2025-01-30T13:49:11.813039010Z" level=info msg="CreateContainer within sandbox \"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a2371d6ef09c5974dcf84526a34e7686f1cc63bcee20f59649570fb18cbb63cf\"" Jan 30 13:49:11.815371 containerd[1597]: time="2025-01-30T13:49:11.814109378Z" level=info msg="StartContainer for \"a2371d6ef09c5974dcf84526a34e7686f1cc63bcee20f59649570fb18cbb63cf\"" Jan 30 13:49:11.907839 containerd[1597]: time="2025-01-30T13:49:11.907712674Z" level=info msg="StartContainer for \"a2371d6ef09c5974dcf84526a34e7686f1cc63bcee20f59649570fb18cbb63cf\" returns successfully" Jan 30 13:49:12.833756 systemd-networkd[1217]: cali6444274696c: Gained IPv6LL Jan 30 13:49:12.839032 containerd[1597]: time="2025-01-30T13:49:12.838324165Z" level=info msg="StopPodSandbox for 
\"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\"" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:12.967 [INFO][4651] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:12.967 [INFO][4651] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" iface="eth0" netns="/var/run/netns/cni-17620d94-5eca-f6ec-aab5-2c2f2ed52c5e" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:12.973 [INFO][4651] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" iface="eth0" netns="/var/run/netns/cni-17620d94-5eca-f6ec-aab5-2c2f2ed52c5e" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:12.977 [INFO][4651] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" iface="eth0" netns="/var/run/netns/cni-17620d94-5eca-f6ec-aab5-2c2f2ed52c5e" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:12.977 [INFO][4651] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:12.977 [INFO][4651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:13.125 [INFO][4660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:13.125 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:13.126 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:13.137 [WARNING][4660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:13.137 [INFO][4660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:13.140 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:13.146856 containerd[1597]: 2025-01-30 13:49:13.142 [INFO][4651] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:13.146856 containerd[1597]: time="2025-01-30T13:49:13.145111852Z" level=info msg="TearDown network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\" successfully" Jan 30 13:49:13.146856 containerd[1597]: time="2025-01-30T13:49:13.145151041Z" level=info msg="StopPodSandbox for \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\" returns successfully" Jan 30 13:49:13.156333 containerd[1597]: time="2025-01-30T13:49:13.156184235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kgnts,Uid:f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0,Namespace:kube-system,Attempt:1,}" Jan 30 13:49:13.157972 systemd[1]: run-netns-cni\x2d17620d94\x2d5eca\x2df6ec\x2daab5\x2d2c2f2ed52c5e.mount: Deactivated successfully. 
Jan 30 13:49:13.284023 systemd-networkd[1217]: cali5790e5492df: Gained IPv6LL Jan 30 13:49:13.395477 systemd-networkd[1217]: cali4452044e417: Link UP Jan 30 13:49:13.396808 systemd-networkd[1217]: cali4452044e417: Gained carrier Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.265 [INFO][4667] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0 coredns-7db6d8ff4d- kube-system f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0 789 0 2025-01-30 13:48:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal coredns-7db6d8ff4d-kgnts eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4452044e417 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.265 [INFO][4667] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.321 [INFO][4678] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" HandleID="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" 
Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.339 [INFO][4678] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" HandleID="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311410), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-kgnts", "timestamp":"2025-01-30 13:49:13.321216602 +0000 UTC"}, Hostname:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.339 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.339 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.339 [INFO][4678] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal' Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.342 [INFO][4678] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.348 [INFO][4678] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.355 [INFO][4678] ipam/ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.358 [INFO][4678] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.362 [INFO][4678] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.362 [INFO][4678] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.365 [INFO][4678] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.374 [INFO][4678] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.2.64/26 handle="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.388 [INFO][4678] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.69/26] block=192.168.2.64/26 handle="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.388 [INFO][4678] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.69/26] handle="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.388 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:13.433835 containerd[1597]: 2025-01-30 13:49:13.388 [INFO][4678] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.69/26] IPv6=[] ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" HandleID="k8s-pod-network.4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.435447 containerd[1597]: 2025-01-30 13:49:13.391 [INFO][4667] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-kgnts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4452044e417", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:13.435447 containerd[1597]: 2025-01-30 13:49:13.391 [INFO][4667] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.69/32] ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" 
WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.435447 containerd[1597]: 2025-01-30 13:49:13.391 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4452044e417 ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.435447 containerd[1597]: 2025-01-30 13:49:13.397 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.435447 containerd[1597]: 2025-01-30 13:49:13.397 [INFO][4667] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb", Pod:"coredns-7db6d8ff4d-kgnts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4452044e417", MAC:"f2:05:da:fd:0d:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:13.435447 containerd[1597]: 2025-01-30 13:49:13.415 [INFO][4667] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kgnts" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:13.492725 containerd[1597]: time="2025-01-30T13:49:13.492314615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:13.492725 containerd[1597]: time="2025-01-30T13:49:13.492390881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:13.492725 containerd[1597]: time="2025-01-30T13:49:13.492445657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:13.492725 containerd[1597]: time="2025-01-30T13:49:13.492583073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:13.621907 containerd[1597]: time="2025-01-30T13:49:13.621790044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kgnts,Uid:f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb\"" Jan 30 13:49:13.629199 containerd[1597]: time="2025-01-30T13:49:13.629103135Z" level=info msg="CreateContainer within sandbox \"4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:49:13.656631 containerd[1597]: time="2025-01-30T13:49:13.656573725Z" level=info msg="CreateContainer within sandbox \"4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb6ab14b6e0ef09a02601f87e1ed50ac77da405bf9718dca4aa35e0d2781e423\"" Jan 30 13:49:13.659448 containerd[1597]: time="2025-01-30T13:49:13.658443281Z" level=info msg="StartContainer for \"cb6ab14b6e0ef09a02601f87e1ed50ac77da405bf9718dca4aa35e0d2781e423\"" Jan 30 13:49:13.760504 containerd[1597]: time="2025-01-30T13:49:13.760350045Z" level=info msg="StartContainer for \"cb6ab14b6e0ef09a02601f87e1ed50ac77da405bf9718dca4aa35e0d2781e423\" returns successfully" Jan 30 
13:49:13.840037 containerd[1597]: time="2025-01-30T13:49:13.839261013Z" level=info msg="StopPodSandbox for \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\"" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.952 [INFO][4791] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.953 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" iface="eth0" netns="/var/run/netns/cni-b1c073ea-8004-3f05-9c8f-87ae37ca403d" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.953 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" iface="eth0" netns="/var/run/netns/cni-b1c073ea-8004-3f05-9c8f-87ae37ca403d" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.954 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" iface="eth0" netns="/var/run/netns/cni-b1c073ea-8004-3f05-9c8f-87ae37ca403d" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.954 [INFO][4791] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.954 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.995 [INFO][4797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.996 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:13.996 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:14.008 [WARNING][4797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:14.008 [INFO][4797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:14.013 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:14.017863 containerd[1597]: 2025-01-30 13:49:14.015 [INFO][4791] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:14.019114 containerd[1597]: time="2025-01-30T13:49:14.018711101Z" level=info msg="TearDown network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\" successfully" Jan 30 13:49:14.019114 containerd[1597]: time="2025-01-30T13:49:14.018751118Z" level=info msg="StopPodSandbox for \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\" returns successfully" Jan 30 13:49:14.020788 containerd[1597]: time="2025-01-30T13:49:14.020429964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-rvcj9,Uid:954b7f72-1740-46bf-9d10-67fe412470fe,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:49:14.169839 kubelet[2800]: I0130 13:49:14.165434 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kgnts" podStartSLOduration=35.165384027 podStartE2EDuration="35.165384027s" 
podCreationTimestamp="2025-01-30 13:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:14.164094512 +0000 UTC m=+49.490733041" watchObservedRunningTime="2025-01-30 13:49:14.165384027 +0000 UTC m=+49.492022558" Jan 30 13:49:14.168236 systemd[1]: run-netns-cni\x2db1c073ea\x2d8004\x2d3f05\x2d9c8f\x2d87ae37ca403d.mount: Deactivated successfully. Jan 30 13:49:14.307775 systemd-networkd[1217]: cali59864786278: Link UP Jan 30 13:49:14.310782 systemd-networkd[1217]: cali59864786278: Gained carrier Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.105 [INFO][4804] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0 calico-apiserver-7876bf9cc5- calico-apiserver 954b7f72-1740-46bf-9d10-67fe412470fe 800 0 2025-01-30 13:48:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7876bf9cc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal calico-apiserver-7876bf9cc5-rvcj9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59864786278 [] []}} ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.106 [INFO][4804] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" 
Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.220 [INFO][4816] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" HandleID="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.243 [INFO][4816] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" HandleID="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", "pod":"calico-apiserver-7876bf9cc5-rvcj9", "timestamp":"2025-01-30 13:49:14.220594553 +0000 UTC"}, Hostname:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.244 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.244 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.244 [INFO][4816] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal' Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.248 [INFO][4816] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.257 [INFO][4816] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.266 [INFO][4816] ipam/ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.271 [INFO][4816] ipam/ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.274 [INFO][4816] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.274 [INFO][4816] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.279 [INFO][4816] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02 Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.287 [INFO][4816] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.2.64/26 handle="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.300 [INFO][4816] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.2.70/26] block=192.168.2.64/26 handle="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.300 [INFO][4816] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.70/26] handle="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" host="ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal" Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.300 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:14.333768 containerd[1597]: 2025-01-30 13:49:14.300 [INFO][4816] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.70/26] IPv6=[] ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" HandleID="k8s-pod-network.748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.334954 containerd[1597]: 2025-01-30 13:49:14.304 [INFO][4804] cni-plugin/k8s.go 386: Populated endpoint ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"954b7f72-1740-46bf-9d10-67fe412470fe", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7876bf9cc5-rvcj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59864786278", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:14.334954 containerd[1597]: 2025-01-30 13:49:14.304 [INFO][4804] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.2.70/32] ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.334954 containerd[1597]: 2025-01-30 13:49:14.304 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali59864786278 ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.334954 containerd[1597]: 2025-01-30 13:49:14.309 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.334954 containerd[1597]: 2025-01-30 13:49:14.309 [INFO][4804] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"954b7f72-1740-46bf-9d10-67fe412470fe", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02", Pod:"calico-apiserver-7876bf9cc5-rvcj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59864786278", MAC:"be:c8:97:68:db:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:14.334954 containerd[1597]: 2025-01-30 13:49:14.324 [INFO][4804] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02" Namespace="calico-apiserver" Pod="calico-apiserver-7876bf9cc5-rvcj9" WorkloadEndpoint="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:14.412445 containerd[1597]: time="2025-01-30T13:49:14.412139214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:14.415739 containerd[1597]: time="2025-01-30T13:49:14.413411704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:14.415739 containerd[1597]: time="2025-01-30T13:49:14.414006544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:14.415739 containerd[1597]: time="2025-01-30T13:49:14.414157478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:14.461451 systemd[1]: run-containerd-runc-k8s.io-748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02-runc.4HySyy.mount: Deactivated successfully. Jan 30 13:49:14.532666 containerd[1597]: time="2025-01-30T13:49:14.532542662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876bf9cc5-rvcj9,Uid:954b7f72-1740-46bf-9d10-67fe412470fe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02\"" Jan 30 13:49:14.573666 containerd[1597]: time="2025-01-30T13:49:14.573596734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:14.574934 containerd[1597]: time="2025-01-30T13:49:14.574861165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:49:14.576556 containerd[1597]: time="2025-01-30T13:49:14.576486154Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:14.580216 containerd[1597]: time="2025-01-30T13:49:14.580131795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:14.581597 containerd[1597]: time="2025-01-30T13:49:14.581001682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.788634919s" Jan 30 13:49:14.581597 containerd[1597]: time="2025-01-30T13:49:14.581054162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:49:14.583148 containerd[1597]: time="2025-01-30T13:49:14.582701577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:49:14.596792 containerd[1597]: time="2025-01-30T13:49:14.596740849Z" level=info msg="CreateContainer within sandbox \"c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:49:14.617113 containerd[1597]: time="2025-01-30T13:49:14.617048588Z" level=info msg="CreateContainer within sandbox \"c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e7399fb22526f5a5292ab2805923036ed2b031884995d9db0a03b41110e71a87\"" Jan 30 13:49:14.619476 containerd[1597]: time="2025-01-30T13:49:14.618058989Z" level=info msg="StartContainer for \"e7399fb22526f5a5292ab2805923036ed2b031884995d9db0a03b41110e71a87\"" Jan 30 13:49:14.689743 systemd-networkd[1217]: cali4452044e417: Gained IPv6LL Jan 30 13:49:14.720422 containerd[1597]: time="2025-01-30T13:49:14.719649350Z" level=info msg="StartContainer for \"e7399fb22526f5a5292ab2805923036ed2b031884995d9db0a03b41110e71a87\" returns successfully" Jan 30 13:49:15.245963 systemd[1]: run-containerd-runc-k8s.io-e7399fb22526f5a5292ab2805923036ed2b031884995d9db0a03b41110e71a87-runc.isWjYb.mount: Deactivated successfully. 
Jan 30 13:49:15.316113 kubelet[2800]: I0130 13:49:15.314328 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cd4d8684d-s5t7r" podStartSLOduration=26.433320585 podStartE2EDuration="29.3143043s" podCreationTimestamp="2025-01-30 13:48:46 +0000 UTC" firstStartedPulling="2025-01-30 13:49:11.701348378 +0000 UTC m=+47.027986891" lastFinishedPulling="2025-01-30 13:49:14.582332078 +0000 UTC m=+49.908970606" observedRunningTime="2025-01-30 13:49:15.19713306 +0000 UTC m=+50.523771587" watchObservedRunningTime="2025-01-30 13:49:15.3143043 +0000 UTC m=+50.640942828" Jan 30 13:49:15.458562 systemd-networkd[1217]: cali59864786278: Gained IPv6LL Jan 30 13:49:16.808560 containerd[1597]: time="2025-01-30T13:49:16.808335303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:16.812346 containerd[1597]: time="2025-01-30T13:49:16.812160130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:49:16.814539 containerd[1597]: time="2025-01-30T13:49:16.814481906Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:16.819718 containerd[1597]: time="2025-01-30T13:49:16.819545707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:16.822811 containerd[1597]: time="2025-01-30T13:49:16.821375002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.238631086s" Jan 30 13:49:16.822811 containerd[1597]: time="2025-01-30T13:49:16.822347808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:49:16.825216 containerd[1597]: time="2025-01-30T13:49:16.824944188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:49:16.826722 containerd[1597]: time="2025-01-30T13:49:16.826517735Z" level=info msg="CreateContainer within sandbox \"e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:49:16.858554 containerd[1597]: time="2025-01-30T13:49:16.857583613Z" level=info msg="CreateContainer within sandbox \"e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ebf02af0c2d185179807e5396b754158f5be5a8476712ac539f604a429259af2\"" Jan 30 13:49:16.859718 containerd[1597]: time="2025-01-30T13:49:16.859598298Z" level=info msg="StartContainer for \"ebf02af0c2d185179807e5396b754158f5be5a8476712ac539f604a429259af2\"" Jan 30 13:49:17.030534 containerd[1597]: time="2025-01-30T13:49:17.030356377Z" level=info msg="StartContainer for \"ebf02af0c2d185179807e5396b754158f5be5a8476712ac539f604a429259af2\" returns successfully" Jan 30 13:49:17.205480 kubelet[2800]: I0130 13:49:17.202444 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7876bf9cc5-jjtnr" podStartSLOduration=26.126412994 podStartE2EDuration="31.202417171s" podCreationTimestamp="2025-01-30 13:48:46 +0000 UTC" firstStartedPulling="2025-01-30 13:49:11.747708337 +0000 UTC m=+47.074346847" lastFinishedPulling="2025-01-30 13:49:16.823712516 
+0000 UTC m=+52.150351024" observedRunningTime="2025-01-30 13:49:17.201763055 +0000 UTC m=+52.528401582" watchObservedRunningTime="2025-01-30 13:49:17.202417171 +0000 UTC m=+52.529055690" Jan 30 13:49:18.047436 ntpd[1541]: Listen normally on 6 vxlan.calico 192.168.2.64:123 Jan 30 13:49:18.047557 ntpd[1541]: Listen normally on 7 vxlan.calico [fe80::64e7:38ff:fe86:d381%4]:123 Jan 30 13:49:18.047633 ntpd[1541]: Listen normally on 8 cali80b13e38ee5 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:49:18.047684 ntpd[1541]: Listen normally on 9 cali75802108f1e [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:49:18.047737 ntpd[1541]: Listen normally on 10 cali5790e5492df [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:49:18.047786 ntpd[1541]: Listen normally on 11 cali6444274696c [fe80::ecee:eeff:feee:eeee%10]:123 Jan 30 13:49:18.047835 ntpd[1541]: Listen normally on 12 cali4452044e417 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 30 13:49:18.047888
ntpd[1541]: Listen normally on 13 cali59864786278 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 30 13:49:18.223575 containerd[1597]: time="2025-01-30T13:49:18.219525304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:18.226287 containerd[1597]: time="2025-01-30T13:49:18.225653881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:49:18.228390 containerd[1597]: time="2025-01-30T13:49:18.228343816Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:18.236444 containerd[1597]: time="2025-01-30T13:49:18.235851162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:18.238203 containerd[1597]: time="2025-01-30T13:49:18.237198571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.412203449s" Jan 30 13:49:18.238203 containerd[1597]: time="2025-01-30T13:49:18.237249396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:49:18.245178 containerd[1597]: time="2025-01-30T13:49:18.243461725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 
13:49:18.251614 containerd[1597]: time="2025-01-30T13:49:18.251378663Z" level=info msg="CreateContainer within sandbox \"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:49:18.292715 containerd[1597]: time="2025-01-30T13:49:18.291344719Z" level=info msg="CreateContainer within sandbox \"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"54151722424f9ee747bf1d285e87abc9593b34887fd2639655f3f4e73a62501e\"" Jan 30 13:49:18.295332 containerd[1597]: time="2025-01-30T13:49:18.293725863Z" level=info msg="StartContainer for \"54151722424f9ee747bf1d285e87abc9593b34887fd2639655f3f4e73a62501e\"" Jan 30 13:49:18.429039 containerd[1597]: time="2025-01-30T13:49:18.428869393Z" level=info msg="StartContainer for \"54151722424f9ee747bf1d285e87abc9593b34887fd2639655f3f4e73a62501e\" returns successfully" Jan 30 13:49:18.462439 containerd[1597]: time="2025-01-30T13:49:18.462225486Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:18.464437 containerd[1597]: time="2025-01-30T13:49:18.464203936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:49:18.471438 containerd[1597]: time="2025-01-30T13:49:18.471263581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 227.752638ms" Jan 30 13:49:18.471438 containerd[1597]: time="2025-01-30T13:49:18.471354453Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:49:18.478724 containerd[1597]: time="2025-01-30T13:49:18.478629317Z" level=info msg="CreateContainer within sandbox \"748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:49:18.510441 containerd[1597]: time="2025-01-30T13:49:18.506924594Z" level=info msg="CreateContainer within sandbox \"748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"15d6aa287eab30a7fc19efe4ef7666016bb4ed8742c11144206451a5b8a32af6\"" Jan 30 13:49:18.512444 containerd[1597]: time="2025-01-30T13:49:18.510822621Z" level=info msg="StartContainer for \"15d6aa287eab30a7fc19efe4ef7666016bb4ed8742c11144206451a5b8a32af6\"" Jan 30 13:49:18.663475 containerd[1597]: time="2025-01-30T13:49:18.663275137Z" level=info msg="StartContainer for \"15d6aa287eab30a7fc19efe4ef7666016bb4ed8742c11144206451a5b8a32af6\" returns successfully" Jan 30 13:49:19.033424 kubelet[2800]: I0130 13:49:19.032647 2800 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:49:19.033424 kubelet[2800]: I0130 13:49:19.032686 2800 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:49:19.263961 kubelet[2800]: I0130 13:49:19.263865 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jrv56" podStartSLOduration=25.333921506 podStartE2EDuration="33.263841133s" podCreationTimestamp="2025-01-30 13:48:46 +0000 UTC" firstStartedPulling="2025-01-30 13:49:10.311055085 +0000 UTC m=+45.637693594" lastFinishedPulling="2025-01-30 
13:49:18.240974699 +0000 UTC m=+53.567613221" observedRunningTime="2025-01-30 13:49:19.241418362 +0000 UTC m=+54.568056886" watchObservedRunningTime="2025-01-30 13:49:19.263841133 +0000 UTC m=+54.590479660" Jan 30 13:49:20.223317 kubelet[2800]: I0130 13:49:20.223222 2800 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:20.855679 kubelet[2800]: I0130 13:49:20.855273 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7876bf9cc5-rvcj9" podStartSLOduration=30.917664594 podStartE2EDuration="34.855246296s" podCreationTimestamp="2025-01-30 13:48:46 +0000 UTC" firstStartedPulling="2025-01-30 13:49:14.536211063 +0000 UTC m=+49.862849570" lastFinishedPulling="2025-01-30 13:49:18.473792756 +0000 UTC m=+53.800431272" observedRunningTime="2025-01-30 13:49:19.263645189 +0000 UTC m=+54.590283715" watchObservedRunningTime="2025-01-30 13:49:20.855246296 +0000 UTC m=+56.181884824" Jan 30 13:49:24.806512 containerd[1597]: time="2025-01-30T13:49:24.806450935Z" level=info msg="StopPodSandbox for \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\"" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.886 [WARNING][5133] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274", Pod:"coredns-7db6d8ff4d-f9rp4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b13e38ee5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 
13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.886 [INFO][5133] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.886 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" iface="eth0" netns="" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.886 [INFO][5133] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.887 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.947 [INFO][5142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.948 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.948 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.958 [WARNING][5142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.958 [INFO][5142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.961 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:24.966626 containerd[1597]: 2025-01-30 13:49:24.962 [INFO][5133] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:24.969883 containerd[1597]: time="2025-01-30T13:49:24.966543575Z" level=info msg="TearDown network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\" successfully" Jan 30 13:49:24.969883 containerd[1597]: time="2025-01-30T13:49:24.966809269Z" level=info msg="StopPodSandbox for \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\" returns successfully" Jan 30 13:49:24.969883 containerd[1597]: time="2025-01-30T13:49:24.969349683Z" level=info msg="RemovePodSandbox for \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\"" Jan 30 13:49:24.969883 containerd[1597]: time="2025-01-30T13:49:24.969789631Z" level=info msg="Forcibly stopping sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\"" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.041 [WARNING][5161] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bec5f10d-9deb-43b7-8bb4-a1f9acbcdd25", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"7eff629b77a5876374ad0b5c58495b2de9ff8cab83de416c627b2fd2c0798274", Pod:"coredns-7db6d8ff4d-f9rp4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80b13e38ee5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.042 [INFO][5161] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.042 [INFO][5161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" iface="eth0" netns="" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.042 [INFO][5161] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.042 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.074 [INFO][5167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.074 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.074 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.083 [WARNING][5167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.083 [INFO][5167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" HandleID="k8s-pod-network.7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--f9rp4-eth0" Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.085 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.088280 containerd[1597]: 2025-01-30 13:49:25.086 [INFO][5161] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb" Jan 30 13:49:25.089694 containerd[1597]: time="2025-01-30T13:49:25.088328744Z" level=info msg="TearDown network for sandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\" successfully" Jan 30 13:49:25.093662 containerd[1597]: time="2025-01-30T13:49:25.093593388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:49:25.093834 containerd[1597]: time="2025-01-30T13:49:25.093765901Z" level=info msg="RemovePodSandbox \"7bd9f26f32de4e02082a9e1c4ed736d590938a463b47a2f9e44ea8c6180096cb\" returns successfully" Jan 30 13:49:25.094774 containerd[1597]: time="2025-01-30T13:49:25.094702952Z" level=info msg="StopPodSandbox for \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\"" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.149 [WARNING][5185] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b2c2fe6-e614-4530-a6fa-02d31dc3b011", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176", Pod:"calico-apiserver-7876bf9cc5-jjtnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6444274696c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.150 [INFO][5185] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.150 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" iface="eth0" netns="" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.150 [INFO][5185] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.150 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.181 [INFO][5192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.181 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.181 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.191 [WARNING][5192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.191 [INFO][5192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.197 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.200904 containerd[1597]: 2025-01-30 13:49:25.199 [INFO][5185] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.202007 containerd[1597]: time="2025-01-30T13:49:25.200905478Z" level=info msg="TearDown network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\" successfully" Jan 30 13:49:25.202007 containerd[1597]: time="2025-01-30T13:49:25.200940163Z" level=info msg="StopPodSandbox for \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\" returns successfully" Jan 30 13:49:25.202007 containerd[1597]: time="2025-01-30T13:49:25.201670135Z" level=info msg="RemovePodSandbox for \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\"" Jan 30 13:49:25.202007 containerd[1597]: time="2025-01-30T13:49:25.201709474Z" level=info msg="Forcibly stopping sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\"" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.290 [WARNING][5210] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b2c2fe6-e614-4530-a6fa-02d31dc3b011", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"e6fcc81ffeb859e9aaed7f0e988ef2eb7dea8228af089f693e19dc0663b61176", Pod:"calico-apiserver-7876bf9cc5-jjtnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6444274696c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.291 [INFO][5210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.291 [INFO][5210] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" iface="eth0" netns="" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.291 [INFO][5210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.291 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.357 [INFO][5216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.358 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.358 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.373 [WARNING][5216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.373 [INFO][5216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" HandleID="k8s-pod-network.ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--jjtnr-eth0" Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.378 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.387682 containerd[1597]: 2025-01-30 13:49:25.383 [INFO][5210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe" Jan 30 13:49:25.391887 containerd[1597]: time="2025-01-30T13:49:25.390223627Z" level=info msg="TearDown network for sandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\" successfully" Jan 30 13:49:25.408771 containerd[1597]: time="2025-01-30T13:49:25.408603540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:49:25.412431 containerd[1597]: time="2025-01-30T13:49:25.411623339Z" level=info msg="RemovePodSandbox \"ab34cb8ecc5c08e0ec4de63d0f180b07d7a49a06fca59e0d1ae9974aa55b0dbe\" returns successfully" Jan 30 13:49:25.413554 containerd[1597]: time="2025-01-30T13:49:25.413515630Z" level=info msg="StopPodSandbox for \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\"" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.465 [WARNING][5236] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb", Pod:"coredns-7db6d8ff4d-kgnts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali4452044e417", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.465 [INFO][5236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.465 [INFO][5236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" iface="eth0" netns="" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.465 [INFO][5236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.465 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.492 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.492 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.492 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.500 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.501 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.504 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.506657 containerd[1597]: 2025-01-30 13:49:25.505 [INFO][5236] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.507840 containerd[1597]: time="2025-01-30T13:49:25.506713429Z" level=info msg="TearDown network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\" successfully" Jan 30 13:49:25.507840 containerd[1597]: time="2025-01-30T13:49:25.506748703Z" level=info msg="StopPodSandbox for \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\" returns successfully" Jan 30 13:49:25.507840 containerd[1597]: time="2025-01-30T13:49:25.507471821Z" level=info msg="RemovePodSandbox for \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\"" Jan 30 13:49:25.507840 containerd[1597]: time="2025-01-30T13:49:25.507512896Z" level=info msg="Forcibly stopping sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\"" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.554 [WARNING][5260] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2b9a9ee-1ab9-4acc-8bb6-079b1de8f5a0", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"4c4416a94968bd90ce116291ba2fea6ffe4d3e6893f93b3ce5454e1d2a8a18fb", Pod:"coredns-7db6d8ff4d-kgnts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4452044e417", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 
13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.554 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.554 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" iface="eth0" netns="" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.554 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.554 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.581 [INFO][5266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.581 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.581 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.591 [WARNING][5266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.591 [INFO][5266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" HandleID="k8s-pod-network.3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--kgnts-eth0" Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.593 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.596120 containerd[1597]: 2025-01-30 13:49:25.594 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf" Jan 30 13:49:25.597201 containerd[1597]: time="2025-01-30T13:49:25.596178873Z" level=info msg="TearDown network for sandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\" successfully" Jan 30 13:49:25.601529 containerd[1597]: time="2025-01-30T13:49:25.601470904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:49:25.601693 containerd[1597]: time="2025-01-30T13:49:25.601554161Z" level=info msg="RemovePodSandbox \"3c6b39ff8f06e3ae96187bc34712f7d5f94b346e3033eaec3132dc5a8a3a80cf\" returns successfully" Jan 30 13:49:25.602318 containerd[1597]: time="2025-01-30T13:49:25.602279743Z" level=info msg="StopPodSandbox for \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\"" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.657 [WARNING][5284] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce3c383-738d-490f-a267-4c123b509bcf", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4", Pod:"csi-node-driver-jrv56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75802108f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.657 [INFO][5284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.657 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" iface="eth0" netns="" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.657 [INFO][5284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.658 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.694 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.694 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.694 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.701 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.701 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.703 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.706073 containerd[1597]: 2025-01-30 13:49:25.704 [INFO][5284] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.706073 containerd[1597]: time="2025-01-30T13:49:25.705896685Z" level=info msg="TearDown network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\" successfully" Jan 30 13:49:25.706073 containerd[1597]: time="2025-01-30T13:49:25.705954848Z" level=info msg="StopPodSandbox for \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\" returns successfully" Jan 30 13:49:25.708390 containerd[1597]: time="2025-01-30T13:49:25.708013088Z" level=info msg="RemovePodSandbox for \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\"" Jan 30 13:49:25.708390 containerd[1597]: time="2025-01-30T13:49:25.708110106Z" level=info msg="Forcibly stopping sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\"" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.759 [WARNING][5309] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ce3c383-738d-490f-a267-4c123b509bcf", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"74023e3b965146bafd526f893862a5bb8fbcf2bffc4dab5517e55d8c4eeb8ea4", Pod:"csi-node-driver-jrv56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali75802108f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.760 [INFO][5309] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.760 [INFO][5309] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" iface="eth0" netns="" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.760 [INFO][5309] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.760 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.802 [INFO][5316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.803 [INFO][5316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.803 [INFO][5316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.812 [WARNING][5316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.812 [INFO][5316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" HandleID="k8s-pod-network.25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-csi--node--driver--jrv56-eth0" Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.813 [INFO][5316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.816150 containerd[1597]: 2025-01-30 13:49:25.814 [INFO][5309] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875" Jan 30 13:49:25.818091 containerd[1597]: time="2025-01-30T13:49:25.816204767Z" level=info msg="TearDown network for sandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\" successfully" Jan 30 13:49:25.822124 containerd[1597]: time="2025-01-30T13:49:25.822004705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:49:25.822430 containerd[1597]: time="2025-01-30T13:49:25.822353982Z" level=info msg="RemovePodSandbox \"25d5485a7ef27ddf29a14cb54f9f09d6559c3280b393782636ebf1a780cfd875\" returns successfully" Jan 30 13:49:25.824127 containerd[1597]: time="2025-01-30T13:49:25.823656423Z" level=info msg="StopPodSandbox for \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\"" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.878 [WARNING][5334] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"954b7f72-1740-46bf-9d10-67fe412470fe", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02", Pod:"calico-apiserver-7876bf9cc5-rvcj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.2.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59864786278", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.878 [INFO][5334] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.878 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" iface="eth0" netns="" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.878 [INFO][5334] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.878 [INFO][5334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.906 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.906 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.906 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.914 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.914 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.916 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:25.918787 containerd[1597]: 2025-01-30 13:49:25.917 [INFO][5334] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:25.920568 containerd[1597]: time="2025-01-30T13:49:25.918785824Z" level=info msg="TearDown network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\" successfully" Jan 30 13:49:25.920568 containerd[1597]: time="2025-01-30T13:49:25.918821500Z" level=info msg="StopPodSandbox for \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\" returns successfully" Jan 30 13:49:25.920568 containerd[1597]: time="2025-01-30T13:49:25.919536375Z" level=info msg="RemovePodSandbox for \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\"" Jan 30 13:49:25.920568 containerd[1597]: time="2025-01-30T13:49:25.919580195Z" level=info msg="Forcibly stopping sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\"" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:25.969 [WARNING][5359] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0", GenerateName:"calico-apiserver-7876bf9cc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"954b7f72-1740-46bf-9d10-67fe412470fe", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876bf9cc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"748edd451c9b3a7cac63d98b37e21f085b743a2c569d1b00d2b29327ae276c02", Pod:"calico-apiserver-7876bf9cc5-rvcj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59864786278", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:25.970 [INFO][5359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:25.970 [INFO][5359] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" iface="eth0" netns="" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:25.970 [INFO][5359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:25.970 [INFO][5359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:26.006 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:26.006 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:26.007 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:26.014 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:26.014 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" HandleID="k8s-pod-network.4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--apiserver--7876bf9cc5--rvcj9-eth0" Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:26.015 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:26.018498 containerd[1597]: 2025-01-30 13:49:26.017 [INFO][5359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e" Jan 30 13:49:26.018498 containerd[1597]: time="2025-01-30T13:49:26.018355101Z" level=info msg="TearDown network for sandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\" successfully" Jan 30 13:49:26.024351 containerd[1597]: time="2025-01-30T13:49:26.024204235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:49:26.024351 containerd[1597]: time="2025-01-30T13:49:26.024308788Z" level=info msg="RemovePodSandbox \"4c9c40a812a591da952af179d9a573a88fbd976b65bd47d034f0f61837d8017e\" returns successfully" Jan 30 13:49:26.026215 containerd[1597]: time="2025-01-30T13:49:26.025750596Z" level=info msg="StopPodSandbox for \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\"" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.075 [WARNING][5384] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0", GenerateName:"calico-kube-controllers-5cd4d8684d-", Namespace:"calico-system", SelfLink:"", UID:"9b01ed56-b260-4e1d-b3d3-dac544b9b63d", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd4d8684d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573", Pod:"calico-kube-controllers-5cd4d8684d-s5t7r", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5790e5492df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.076 [INFO][5384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.076 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" iface="eth0" netns="" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.076 [INFO][5384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.076 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.118 [INFO][5390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.118 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.118 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.126 [WARNING][5390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.126 [INFO][5390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.127 [INFO][5390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:26.130430 containerd[1597]: 2025-01-30 13:49:26.129 [INFO][5384] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.131559 containerd[1597]: time="2025-01-30T13:49:26.130447304Z" level=info msg="TearDown network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\" successfully" Jan 30 13:49:26.131559 containerd[1597]: time="2025-01-30T13:49:26.130484119Z" level=info msg="StopPodSandbox for \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\" returns successfully" Jan 30 13:49:26.131559 containerd[1597]: time="2025-01-30T13:49:26.131542597Z" level=info msg="RemovePodSandbox for \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\"" Jan 30 13:49:26.131780 containerd[1597]: time="2025-01-30T13:49:26.131583453Z" level=info msg="Forcibly stopping sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\"" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.177 [WARNING][5408] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0", GenerateName:"calico-kube-controllers-5cd4d8684d-", Namespace:"calico-system", SelfLink:"", UID:"9b01ed56-b260-4e1d-b3d3-dac544b9b63d", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cd4d8684d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-95c971ef03008a3d90e9.c.flatcar-212911.internal", ContainerID:"c252151db8fa5c6948ffd0a0e113c990665de5228e16c6aeb0d976f0964e0573", Pod:"calico-kube-controllers-5cd4d8684d-s5t7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5790e5492df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.178 [INFO][5408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.179 
[INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" iface="eth0" netns="" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.179 [INFO][5408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.179 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.227 [INFO][5415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.227 [INFO][5415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.228 [INFO][5415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.237 [WARNING][5415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.238 [INFO][5415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" HandleID="k8s-pod-network.3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Workload="ci--4081--3--0--95c971ef03008a3d90e9.c.flatcar--212911.internal-k8s-calico--kube--controllers--5cd4d8684d--s5t7r-eth0" Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.243 [INFO][5415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:26.250726 containerd[1597]: 2025-01-30 13:49:26.245 [INFO][5408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c" Jan 30 13:49:26.252642 containerd[1597]: time="2025-01-30T13:49:26.252122397Z" level=info msg="TearDown network for sandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\" successfully" Jan 30 13:49:26.269413 containerd[1597]: time="2025-01-30T13:49:26.268345246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:49:26.269413 containerd[1597]: time="2025-01-30T13:49:26.268443066Z" level=info msg="RemovePodSandbox \"3d28b1b525f5762eb760ab01b33b72790294c53d0b8932b5a127c10fd661799c\" returns successfully" Jan 30 13:49:28.275864 systemd[1]: Started sshd@7-10.128.0.26:22-139.178.68.195:34826.service - OpenSSH per-connection server daemon (139.178.68.195:34826). 
Jan 30 13:49:28.635542 sshd[5450]: Accepted publickey for core from 139.178.68.195 port 34826 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:28.638838 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:28.649304 systemd-logind[1582]: New session 8 of user core. Jan 30 13:49:28.657836 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:49:29.154238 sshd[5450]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:29.159504 systemd[1]: sshd@7-10.128.0.26:22-139.178.68.195:34826.service: Deactivated successfully. Jan 30 13:49:29.165959 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:49:29.166462 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:49:29.169676 systemd-logind[1582]: Removed session 8. Jan 30 13:49:34.213014 systemd[1]: Started sshd@8-10.128.0.26:22-139.178.68.195:34834.service - OpenSSH per-connection server daemon (139.178.68.195:34834). Jan 30 13:49:34.561183 sshd[5465]: Accepted publickey for core from 139.178.68.195 port 34834 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:34.563558 sshd[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:34.570314 systemd-logind[1582]: New session 9 of user core. Jan 30 13:49:34.579854 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:49:34.887631 sshd[5465]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:34.892703 systemd[1]: sshd@8-10.128.0.26:22-139.178.68.195:34834.service: Deactivated successfully. Jan 30 13:49:34.899641 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:49:34.900560 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:49:34.903504 systemd-logind[1582]: Removed session 9. 
Jan 30 13:49:36.804215 kubelet[2800]: I0130 13:49:36.804046 2800 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:39.950815 systemd[1]: Started sshd@9-10.128.0.26:22-139.178.68.195:52464.service - OpenSSH per-connection server daemon (139.178.68.195:52464). Jan 30 13:49:40.338621 sshd[5484]: Accepted publickey for core from 139.178.68.195 port 52464 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:40.341605 sshd[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:40.350114 systemd-logind[1582]: New session 10 of user core. Jan 30 13:49:40.358489 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:49:40.684986 sshd[5484]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:40.694564 systemd[1]: sshd@9-10.128.0.26:22-139.178.68.195:52464.service: Deactivated successfully. Jan 30 13:49:40.702284 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:49:40.704307 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:49:40.707339 systemd-logind[1582]: Removed session 10. Jan 30 13:49:40.744254 systemd[1]: Started sshd@10-10.128.0.26:22-139.178.68.195:52480.service - OpenSSH per-connection server daemon (139.178.68.195:52480). Jan 30 13:49:41.095440 sshd[5501]: Accepted publickey for core from 139.178.68.195 port 52480 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:41.097550 sshd[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:41.104817 systemd-logind[1582]: New session 11 of user core. Jan 30 13:49:41.111173 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:49:41.476936 sshd[5501]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:41.485660 systemd[1]: sshd@10-10.128.0.26:22-139.178.68.195:52480.service: Deactivated successfully. 
Jan 30 13:49:41.492445 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:49:41.493125 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:49:41.495527 systemd-logind[1582]: Removed session 11. Jan 30 13:49:41.536312 systemd[1]: Started sshd@11-10.128.0.26:22-139.178.68.195:52488.service - OpenSSH per-connection server daemon (139.178.68.195:52488). Jan 30 13:49:41.881216 sshd[5513]: Accepted publickey for core from 139.178.68.195 port 52488 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:41.884244 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:41.894619 systemd-logind[1582]: New session 12 of user core. Jan 30 13:49:41.900807 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:49:42.219235 sshd[5513]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:42.224332 systemd[1]: sshd@11-10.128.0.26:22-139.178.68.195:52488.service: Deactivated successfully. Jan 30 13:49:42.230643 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:49:42.231525 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:49:42.233691 systemd-logind[1582]: Removed session 12. Jan 30 13:49:47.278266 systemd[1]: Started sshd@12-10.128.0.26:22-139.178.68.195:57490.service - OpenSSH per-connection server daemon (139.178.68.195:57490). Jan 30 13:49:47.621301 sshd[5536]: Accepted publickey for core from 139.178.68.195 port 57490 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:47.623191 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:47.629482 systemd-logind[1582]: New session 13 of user core. Jan 30 13:49:47.636643 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 13:49:47.952025 sshd[5536]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:47.957594 systemd[1]: sshd@12-10.128.0.26:22-139.178.68.195:57490.service: Deactivated successfully. Jan 30 13:49:47.965460 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:49:47.967053 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:49:47.968821 systemd-logind[1582]: Removed session 13. Jan 30 13:49:49.620812 systemd[1]: run-containerd-runc-k8s.io-e7399fb22526f5a5292ab2805923036ed2b031884995d9db0a03b41110e71a87-runc.k86QvZ.mount: Deactivated successfully. Jan 30 13:49:53.010222 systemd[1]: Started sshd@13-10.128.0.26:22-139.178.68.195:57496.service - OpenSSH per-connection server daemon (139.178.68.195:57496). Jan 30 13:49:53.369978 sshd[5590]: Accepted publickey for core from 139.178.68.195 port 57496 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:53.372587 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:53.385192 systemd-logind[1582]: New session 14 of user core. Jan 30 13:49:53.393342 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:49:53.701353 sshd[5590]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:53.712706 systemd[1]: sshd@13-10.128.0.26:22-139.178.68.195:57496.service: Deactivated successfully. Jan 30 13:49:53.721826 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:49:53.722697 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:49:53.726822 systemd-logind[1582]: Removed session 14. Jan 30 13:49:58.760824 systemd[1]: Started sshd@14-10.128.0.26:22-139.178.68.195:36312.service - OpenSSH per-connection server daemon (139.178.68.195:36312). 
Jan 30 13:49:59.106234 sshd[5624]: Accepted publickey for core from 139.178.68.195 port 36312 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:59.108219 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:59.114365 systemd-logind[1582]: New session 15 of user core. Jan 30 13:49:59.117816 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:49:59.439675 sshd[5624]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:59.445321 systemd[1]: sshd@14-10.128.0.26:22-139.178.68.195:36312.service: Deactivated successfully. Jan 30 13:49:59.451281 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:49:59.453774 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:49:59.455300 systemd-logind[1582]: Removed session 15. Jan 30 13:50:04.497889 systemd[1]: Started sshd@15-10.128.0.26:22-139.178.68.195:36322.service - OpenSSH per-connection server daemon (139.178.68.195:36322). Jan 30 13:50:04.850188 sshd[5639]: Accepted publickey for core from 139.178.68.195 port 36322 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:04.852541 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:04.859751 systemd-logind[1582]: New session 16 of user core. Jan 30 13:50:04.867995 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:50:05.276688 sshd[5639]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:05.287592 systemd[1]: sshd@15-10.128.0.26:22-139.178.68.195:36322.service: Deactivated successfully. Jan 30 13:50:05.300719 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:50:05.301017 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:50:05.309970 systemd-logind[1582]: Removed session 16. 
Jan 30 13:50:05.337880 systemd[1]: Started sshd@16-10.128.0.26:22-139.178.68.195:55062.service - OpenSSH per-connection server daemon (139.178.68.195:55062). Jan 30 13:50:05.731617 sshd[5653]: Accepted publickey for core from 139.178.68.195 port 55062 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:05.734765 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:05.744649 systemd-logind[1582]: New session 17 of user core. Jan 30 13:50:05.751311 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:50:06.156537 sshd[5653]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:06.164082 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:50:06.165169 systemd[1]: sshd@16-10.128.0.26:22-139.178.68.195:55062.service: Deactivated successfully. Jan 30 13:50:06.178608 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:50:06.184937 systemd-logind[1582]: Removed session 17. Jan 30 13:50:06.221143 systemd[1]: Started sshd@17-10.128.0.26:22-139.178.68.195:55076.service - OpenSSH per-connection server daemon (139.178.68.195:55076). Jan 30 13:50:06.575211 sshd[5665]: Accepted publickey for core from 139.178.68.195 port 55076 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:06.577250 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:06.584942 systemd-logind[1582]: New session 18 of user core. Jan 30 13:50:06.591853 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:50:08.785837 sshd[5665]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:08.794261 systemd[1]: sshd@17-10.128.0.26:22-139.178.68.195:55076.service: Deactivated successfully. Jan 30 13:50:08.796269 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:50:08.804278 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 30 13:50:08.805796 systemd-logind[1582]: Removed session 18. Jan 30 13:50:08.846164 systemd[1]: Started sshd@18-10.128.0.26:22-139.178.68.195:55082.service - OpenSSH per-connection server daemon (139.178.68.195:55082). Jan 30 13:50:09.205881 sshd[5684]: Accepted publickey for core from 139.178.68.195 port 55082 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:09.207013 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:09.214720 systemd-logind[1582]: New session 19 of user core. Jan 30 13:50:09.220844 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:50:09.661478 sshd[5684]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:09.666563 systemd[1]: sshd@18-10.128.0.26:22-139.178.68.195:55082.service: Deactivated successfully. Jan 30 13:50:09.674367 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:50:09.675769 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:50:09.677285 systemd-logind[1582]: Removed session 19. Jan 30 13:50:09.721812 systemd[1]: Started sshd@19-10.128.0.26:22-139.178.68.195:55088.service - OpenSSH per-connection server daemon (139.178.68.195:55088). Jan 30 13:50:10.062170 sshd[5698]: Accepted publickey for core from 139.178.68.195 port 55088 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:10.064506 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:10.071091 systemd-logind[1582]: New session 20 of user core. Jan 30 13:50:10.075856 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:50:10.380785 sshd[5698]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:10.387232 systemd[1]: sshd@19-10.128.0.26:22-139.178.68.195:55088.service: Deactivated successfully. Jan 30 13:50:10.392684 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 30 13:50:10.393775 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:50:10.395325 systemd-logind[1582]: Removed session 20. Jan 30 13:50:15.441027 systemd[1]: Started sshd@20-10.128.0.26:22-139.178.68.195:53920.service - OpenSSH per-connection server daemon (139.178.68.195:53920). Jan 30 13:50:15.791683 sshd[5716]: Accepted publickey for core from 139.178.68.195 port 53920 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:15.794124 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:15.802488 systemd-logind[1582]: New session 21 of user core. Jan 30 13:50:15.809040 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:50:16.115296 sshd[5716]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:16.120166 systemd[1]: sshd@20-10.128.0.26:22-139.178.68.195:53920.service: Deactivated successfully. Jan 30 13:50:16.127714 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:50:16.127809 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:50:16.129977 systemd-logind[1582]: Removed session 21. Jan 30 13:50:20.770238 systemd[1]: run-containerd-runc-k8s.io-38963f1f569e8970b47ef5c0d18f9593098dba6b0f9374033b4a22e30fdd74b6-runc.RUk0ES.mount: Deactivated successfully. Jan 30 13:50:21.174383 systemd[1]: Started sshd@21-10.128.0.26:22-139.178.68.195:53924.service - OpenSSH per-connection server daemon (139.178.68.195:53924). Jan 30 13:50:21.527319 sshd[5751]: Accepted publickey for core from 139.178.68.195 port 53924 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:21.529329 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:21.535052 systemd-logind[1582]: New session 22 of user core. Jan 30 13:50:21.540833 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 30 13:50:21.852077 sshd[5751]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:21.860612 systemd[1]: sshd@21-10.128.0.26:22-139.178.68.195:53924.service: Deactivated successfully. Jan 30 13:50:21.867080 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:50:21.867659 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:50:21.871735 systemd-logind[1582]: Removed session 22. Jan 30 13:50:26.911163 systemd[1]: Started sshd@22-10.128.0.26:22-139.178.68.195:47014.service - OpenSSH per-connection server daemon (139.178.68.195:47014). Jan 30 13:50:27.258177 sshd[5791]: Accepted publickey for core from 139.178.68.195 port 47014 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:27.260095 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:27.266972 systemd-logind[1582]: New session 23 of user core. Jan 30 13:50:27.272897 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:50:27.583159 sshd[5791]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:27.590112 systemd[1]: sshd@22-10.128.0.26:22-139.178.68.195:47014.service: Deactivated successfully. Jan 30 13:50:27.596024 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:50:27.596950 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:50:27.598817 systemd-logind[1582]: Removed session 23.