Jan 30 13:48:59.088859 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:48:59.088907 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:59.088926 kernel: BIOS-provided physical RAM map: Jan 30 13:48:59.088941 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 30 13:48:59.088955 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 30 13:48:59.088969 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 30 13:48:59.088986 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 30 13:48:59.089005 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 30 13:48:59.089020 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 30 13:48:59.089035 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 30 13:48:59.089050 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 30 13:48:59.089065 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 30 13:48:59.089081 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 30 13:48:59.089095 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 30 13:48:59.089118 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 30 13:48:59.089144 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 30 13:48:59.089164 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 30 13:48:59.089180 kernel: NX (Execute Disable) protection: active Jan 30 13:48:59.089198 kernel: APIC: Static calls initialized Jan 30 13:48:59.089215 kernel: efi: EFI v2.7 by EDK II Jan 30 13:48:59.089230 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 30 13:48:59.089247 kernel: SMBIOS 2.4 present. 
Jan 30 13:48:59.089264 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 30 13:48:59.089281 kernel: Hypervisor detected: KVM Jan 30 13:48:59.089301 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:48:59.089317 kernel: kvm-clock: using sched offset of 12159584440 cycles Jan 30 13:48:59.089335 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:48:59.089353 kernel: tsc: Detected 2299.998 MHz processor Jan 30 13:48:59.089370 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:48:59.089388 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:48:59.089405 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 30 13:48:59.089422 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 30 13:48:59.089439 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:48:59.089459 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 30 13:48:59.089484 kernel: Using GB pages for direct mapping Jan 30 13:48:59.089501 kernel: Secure boot disabled Jan 30 13:48:59.089518 kernel: ACPI: Early table checksum verification disabled Jan 30 13:48:59.089535 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 30 13:48:59.089552 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 30 13:48:59.089569 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 30 13:48:59.089593 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 30 13:48:59.089614 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 30 13:48:59.089632 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 30 13:48:59.089651 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 30 13:48:59.089667 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 30 13:48:59.089685 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 30 13:48:59.089702 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 30 13:48:59.089723 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 30 13:48:59.089762 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 30 13:48:59.089778 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 30 13:48:59.089793 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 30 13:48:59.089809 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 30 13:48:59.089823 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 30 13:48:59.089839 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 30 13:48:59.089856 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 30 13:48:59.089873 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 30 13:48:59.089896 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 30 13:48:59.089913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:48:59.089930 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:48:59.089948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:48:59.089964 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 30 13:48:59.089980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 30 13:48:59.089997 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 30 13:48:59.090015 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 30 13:48:59.090031 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 30 13:48:59.090053 kernel: Zone ranges: Jan 30 13:48:59.090069 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:48:59.090087 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:48:59.090104 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:48:59.090121 kernel: Movable zone start for each node Jan 30 13:48:59.090138 kernel: Early memory node ranges Jan 30 13:48:59.090155 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 30 13:48:59.090173 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 30 13:48:59.090189 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 30 13:48:59.090206 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 30 13:48:59.090226 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:48:59.090243 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 30 13:48:59.090260 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:48:59.090277 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 30 13:48:59.090295 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 30 13:48:59.090311 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 30 13:48:59.090327 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 30 13:48:59.090344 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 30 13:48:59.090362 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:48:59.090384 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:48:59.090400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:48:59.090418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:48:59.090435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:48:59.090453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:48:59.090470 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:48:59.090495 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:48:59.090512 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 30 13:48:59.090528 kernel: Booting paravirtualized kernel on KVM Jan 30 13:48:59.090550 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:48:59.090567 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:48:59.090585 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:48:59.090602 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:48:59.090618 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:48:59.090635 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:48:59.090652 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:48:59.090671 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:59.090695 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:48:59.090711 kernel: random: crng init done Jan 30 13:48:59.090729 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:48:59.090763 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:48:59.090780 kernel: Fallback order for Node 0: 0 Jan 30 13:48:59.090796 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 30 13:48:59.090814 kernel: Policy zone: Normal Jan 30 13:48:59.090831 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:48:59.090848 kernel: software IO TLB: area num 2. Jan 30 13:48:59.090870 kernel: Memory: 7513376K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346948K reserved, 0K cma-reserved) Jan 30 13:48:59.090888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:48:59.090905 kernel: Kernel/User page tables isolation: enabled Jan 30 13:48:59.090923 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:48:59.090940 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:48:59.090956 kernel: Dynamic Preempt: voluntary Jan 30 13:48:59.090974 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:48:59.090993 kernel: rcu: RCU event tracing is enabled. Jan 30 13:48:59.091028 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:48:59.091046 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:48:59.091064 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:48:59.091087 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:48:59.091106 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:48:59.091124 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:48:59.091143 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:48:59.091162 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:48:59.091181 kernel: Console: colour dummy device 80x25 Jan 30 13:48:59.091203 kernel: printk: console [ttyS0] enabled Jan 30 13:48:59.091223 kernel: ACPI: Core revision 20230628 Jan 30 13:48:59.091241 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:48:59.091260 kernel: x2apic enabled Jan 30 13:48:59.091279 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:48:59.091298 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 30 13:48:59.091317 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 30 13:48:59.091337 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 30 13:48:59.091359 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 30 13:48:59.091378 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 30 13:48:59.091397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:48:59.091416 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:48:59.091435 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:48:59.091454 kernel: Spectre V2 : Mitigation: IBRS Jan 30 13:48:59.091482 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:48:59.091501 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:48:59.091520 kernel: RETBleed: Mitigation: IBRS Jan 30 13:48:59.091544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:48:59.091562 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 30 13:48:59.091578 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:48:59.091595 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:48:59.091614 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:48:59.091633 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:48:59.091652 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:48:59.091671 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:48:59.091690 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:48:59.091713 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:48:59.091733 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:48:59.091775 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:48:59.091795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:48:59.091814 kernel: landlock: Up and running. Jan 30 13:48:59.091833 kernel: SELinux: Initializing. Jan 30 13:48:59.091851 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.091870 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.091888 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 30 13:48:59.091911 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:48:59.091930 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:48:59.091948 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:48:59.091967 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 30 13:48:59.091985 kernel: signal: max sigframe size: 1776 Jan 30 13:48:59.092004 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:48:59.092033 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:48:59.092051 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:48:59.092069 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:48:59.092091 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:48:59.092109 kernel: .... node #0, CPUs: #1 Jan 30 13:48:59.092129 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 30 13:48:59.092148 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 30 13:48:59.092167 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:48:59.092195 kernel: smpboot: Max logical packages: 1 Jan 30 13:48:59.092218 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 30 13:48:59.092245 kernel: devtmpfs: initialized Jan 30 13:48:59.092269 kernel: x86/mm: Memory block size: 128MB Jan 30 13:48:59.092288 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 30 13:48:59.092307 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:48:59.092326 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:48:59.092345 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:48:59.092364 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:48:59.092383 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:48:59.092402 kernel: audit: type=2000 audit(1738244937.484:1): state=initialized audit_enabled=0 res=1 Jan 30 13:48:59.092421 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:48:59.092444 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:48:59.092464 kernel: cpuidle: using governor menu Jan 30 13:48:59.092491 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:48:59.092516 kernel: dca service started, version 1.12.1 Jan 30 13:48:59.092536 kernel: PCI: Using configuration type 1 for base access Jan 30 13:48:59.092555 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:48:59.092581 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:48:59.092598 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:48:59.092617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:48:59.092641 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:48:59.092660 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:48:59.092686 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:48:59.092703 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:48:59.092721 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:48:59.092739 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 30 13:48:59.092773 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:48:59.092792 kernel: ACPI: Interpreter enabled Jan 30 13:48:59.092810 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:48:59.092834 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:48:59.092852 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:48:59.092868 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:48:59.092886 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 30 13:48:59.092905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:48:59.093154 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:48:59.093352 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:48:59.093543 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:48:59.093573 kernel: PCI host bridge to bus 0000:00 Jan 30 13:48:59.093765 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:48:59.093945 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:48:59.094143 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:48:59.094330 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 30 13:48:59.094498 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:48:59.094696 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:48:59.094918 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 30 13:48:59.095103 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:48:59.095299 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 30 13:48:59.095527 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 30 13:48:59.095716 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 30 13:48:59.095948 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 30 13:48:59.096145 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:48:59.096335 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 30 13:48:59.096530 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 30 13:48:59.096722 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:48:59.096942 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 30 13:48:59.097135 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 30 13:48:59.097166 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:48:59.097186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:48:59.097204 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:48:59.097221 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:48:59.097238 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:48:59.097256 kernel: iommu: Default domain type: Translated Jan 30 13:48:59.097273 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:48:59.097289 kernel: efivars: Registered efivars operations Jan 30 13:48:59.097308 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:48:59.097333 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:48:59.097351 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 30 13:48:59.097369 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 30 13:48:59.097388 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 30 13:48:59.097407 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 30 13:48:59.097426 kernel: vgaarb: loaded Jan 30 13:48:59.097446 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:48:59.097465 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:48:59.097493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:48:59.097517 kernel: pnp: PnP ACPI init Jan 30 13:48:59.097537 kernel: pnp: PnP ACPI: found 7 devices Jan 30 13:48:59.097557 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:48:59.097576 kernel: NET: Registered PF_INET protocol family Jan 30 13:48:59.097597 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:48:59.097617 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:48:59.097637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:48:59.097657 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:48:59.097676 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:48:59.097700 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:48:59.097720 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.097739 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.097847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:48:59.097865 kernel: NET: Registered PF_XDP protocol family Jan 30 13:48:59.098054 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:48:59.098225 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:48:59.098408 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:48:59.098590 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 30 13:48:59.098806 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:48:59.098834 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:48:59.098855 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:48:59.098874 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 30 13:48:59.098894 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:48:59.098923 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 30 13:48:59.098943 kernel: clocksource: Switched to clocksource tsc Jan 30 13:48:59.098969 kernel: Initialise system trusted keyrings Jan 30 13:48:59.098988 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:48:59.099008 kernel: Key type asymmetric registered Jan 30 13:48:59.099027 kernel: Asymmetric key parser 'x509' registered Jan 30 13:48:59.099047 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:48:59.099067 kernel: io scheduler mq-deadline registered Jan 30 13:48:59.099086 kernel: io scheduler kyber registered Jan 30 13:48:59.099106 kernel: io scheduler bfq registered Jan 30 13:48:59.099125 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:48:59.099150 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:48:59.099341 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 30 13:48:59.099366 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 30 13:48:59.099569 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 30 13:48:59.099600 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:48:59.099806 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 30 13:48:59.099830 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:48:59.099850 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:48:59.099869 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:48:59.099896 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 30 13:48:59.099916 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 30 13:48:59.100123 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 30 13:48:59.100149 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:48:59.100169 kernel: i8042: Warning: Keylock active Jan 30 13:48:59.100200 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:48:59.100222 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:48:59.100411 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 30 13:48:59.100594 kernel: rtc_cmos 00:00: registered as rtc0 Jan 30 13:48:59.100788 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:48:58 UTC (1738244938) Jan 30 13:48:59.100965 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 30 13:48:59.100989 kernel: intel_pstate: CPU model not supported Jan 30 13:48:59.101008 kernel: pstore: Using crash dump compression: deflate Jan 30 13:48:59.101027 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:48:59.101046 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:48:59.101065 kernel: Segment Routing with IPv6 Jan 30 13:48:59.101090 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:48:59.101109 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:48:59.101128 kernel: Key type dns_resolver registered Jan 30 13:48:59.101147 kernel: IPI shorthand broadcast: enabled Jan 30 13:48:59.101166 kernel: sched_clock: Marking stable (826004012, 169689029)->(1017507376, -21814335) Jan 30 13:48:59.101185 kernel: registered taskstats version 1 Jan 30 13:48:59.101204 kernel: Loading compiled-in X.509 certificates Jan 30 13:48:59.101223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:48:59.101242 kernel: Key type .fscrypt registered Jan 30 13:48:59.101265 kernel: Key type fscrypt-provisioning registered Jan 30 13:48:59.101284 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:48:59.101303 kernel: ima: No architecture policies found Jan 30 
13:48:59.101322 kernel: clk: Disabling unused clocks Jan 30 13:48:59.101341 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:48:59.101359 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:48:59.101379 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:48:59.101398 kernel: Run /init as init process Jan 30 13:48:59.101417 kernel: with arguments: Jan 30 13:48:59.101439 kernel: /init Jan 30 13:48:59.101457 kernel: with environment: Jan 30 13:48:59.101490 kernel: HOME=/ Jan 30 13:48:59.101517 kernel: TERM=linux Jan 30 13:48:59.101536 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:48:59.101564 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:48:59.101589 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:48:59.101624 systemd[1]: Detected virtualization google. Jan 30 13:48:59.101644 systemd[1]: Detected architecture x86-64. Jan 30 13:48:59.101663 systemd[1]: Running in initrd. Jan 30 13:48:59.101688 systemd[1]: No hostname configured, using default hostname. Jan 30 13:48:59.101708 systemd[1]: Hostname set to . Jan 30 13:48:59.101729 systemd[1]: Initializing machine ID from random generator. Jan 30 13:48:59.101774 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:48:59.101794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:48:59.101818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:48:59.101840 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:48:59.101867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:48:59.101887 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:48:59.101912 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:48:59.101935 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:48:59.101956 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:48:59.101980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:48:59.102000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:48:59.102047 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:48:59.102072 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:48:59.102092 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:48:59.102113 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:48:59.102145 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:48:59.102170 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:48:59.102197 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:48:59.102223 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 30 13:48:59.102244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:48:59.102265 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:48:59.102286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:48:59.102312 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:48:59.102333 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:48:59.102359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:48:59.102380 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:48:59.102401 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:48:59.102422 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:48:59.102443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:48:59.102479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:48:59.102501 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:48:59.102553 systemd-journald[183]: Collecting audit messages is disabled. Jan 30 13:48:59.102601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:48:59.102622 systemd-journald[183]: Journal started Jan 30 13:48:59.102673 systemd-journald[183]: Runtime Journal (/run/log/journal/161dcfd666884636b29b95d9c64018f8) is 8.0M, max 148.7M, 140.7M free. Jan 30 13:48:59.108130 systemd-modules-load[184]: Inserted module 'overlay' Jan 30 13:48:59.114868 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:48:59.119360 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:48:59.146781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:48:59.150821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:48:59.153771 kernel: Bridge firewalling registered Jan 30 13:48:59.154104 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 30 13:48:59.157953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:48:59.160960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:48:59.169246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:59.177189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:48:59.192979 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:48:59.194719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:48:59.203946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:48:59.204707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:48:59.221557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:48:59.225300 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:48:59.235578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:48:59.246485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 13:48:59.253975 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:48:59.285007 systemd-resolved[212]: Positive Trust Anchors: Jan 30 13:48:59.285547 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:48:59.292856 dracut-cmdline[216]: dracut-dracut-053 Jan 30 13:48:59.292856 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:59.285769 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:48:59.292214 systemd-resolved[212]: Defaulting to hostname 'linux'. Jan 30 13:48:59.294806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:48:59.309067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:48:59.389784 kernel: SCSI subsystem initialized Jan 30 13:48:59.400796 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:48:59.412796 kernel: iscsi: registered transport (tcp) Jan 30 13:48:59.436320 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:48:59.436416 kernel: QLogic iSCSI HBA Driver Jan 30 13:48:59.492637 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:48:59.499124 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:48:59.540958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:48:59.541051 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:48:59.541077 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:48:59.584803 kernel: raid6: avx2x4 gen() 18213 MB/s Jan 30 13:48:59.601789 kernel: raid6: avx2x2 gen() 18272 MB/s Jan 30 13:48:59.619150 kernel: raid6: avx2x1 gen() 14289 MB/s Jan 30 13:48:59.619185 kernel: raid6: using algorithm avx2x2 gen() 18272 MB/s Jan 30 13:48:59.637389 kernel: raid6: .... xor() 17697 MB/s, rmw enabled Jan 30 13:48:59.637436 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:48:59.660781 kernel: xor: automatically using best checksumming function avx Jan 30 13:48:59.839786 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:48:59.853131 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:48:59.864063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:48:59.897983 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 30 13:48:59.905270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:48:59.916114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:48:59.949278 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 30 13:48:59.986877 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:48:59.993953 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:49:00.084971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:00.094436 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:49:00.140711 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:49:00.148621 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:49:00.152888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:00.157452 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:49:00.175990 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:49:00.210273 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:49:00.213896 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:49:00.256275 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:49:00.256841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:00.283528 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:49:00.283571 kernel: AES CTR mode by8 optimization enabled Jan 30 13:49:00.294001 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:49:00.294265 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:49:00.311902 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 30 13:49:00.298868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:49:00.299131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:00.307513 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:00.333810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:00.358500 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 30 13:49:00.373292 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 30 13:49:00.373589 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 30 13:49:00.373843 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 30 13:49:00.374080 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:49:00.374311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:49:00.374340 kernel: GPT:17805311 != 25165823 Jan 30 13:49:00.374374 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:49:00.374399 kernel: GPT:17805311 != 25165823 Jan 30 13:49:00.374424 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:49:00.374450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:00.374483 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 30 13:49:00.366792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:00.381022 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 13:49:00.433776 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (466) Jan 30 13:49:00.438319 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:00.447904 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (447) Jan 30 13:49:00.455646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 30 13:49:00.469860 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 30 13:49:00.490317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 30 13:49:00.496516 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 30 13:49:00.496760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 30 13:49:00.509987 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:49:00.535374 disk-uuid[549]: Primary Header is updated. Jan 30 13:49:00.535374 disk-uuid[549]: Secondary Entries is updated. Jan 30 13:49:00.535374 disk-uuid[549]: Secondary Header is updated. Jan 30 13:49:00.548771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:00.572787 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:00.579766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:01.579814 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:01.580730 disk-uuid[550]: The operation has completed successfully. Jan 30 13:49:01.647572 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:49:01.647721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:49:01.680946 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:49:01.708476 sh[567]: Success Jan 30 13:49:01.711910 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:49:01.799083 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:49:01.806003 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:49:01.830332 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:49:01.861786 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:49:01.861859 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:01.878655 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:49:01.878713 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:49:01.885471 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:49:01.918777 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:49:01.924457 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:49:01.925424 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:49:01.930945 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 30 13:49:01.991760 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:01.991833 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:01.991860 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:01.989625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:49:02.038380 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:49:02.038426 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:02.038453 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:02.056564 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:49:02.062981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:49:02.159627 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:49:02.165640 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:49:02.266617 systemd-networkd[751]: lo: Link UP Jan 30 13:49:02.266637 systemd-networkd[751]: lo: Gained carrier Jan 30 13:49:02.268928 ignition[662]: Ignition 2.19.0 Jan 30 13:49:02.269311 systemd-networkd[751]: Enumeration completed Jan 30 13:49:02.268941 ignition[662]: Stage: fetch-offline Jan 30 13:49:02.270095 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:49:02.269001 ignition[662]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.270800 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:02.269017 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.270807 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:49:02.269169 ignition[662]: parsed url from cmdline: "" Jan 30 13:49:02.272991 systemd-networkd[751]: eth0: Link UP Jan 30 13:49:02.269176 ignition[662]: no config URL provided Jan 30 13:49:02.272997 systemd-networkd[751]: eth0: Gained carrier Jan 30 13:49:02.269186 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:49:02.273006 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:02.269210 ignition[662]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:49:02.287834 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.25/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 13:49:02.269223 ignition[662]: failed to fetch config: resource requires networking Jan 30 13:49:02.289556 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:49:02.269517 ignition[662]: Ignition finished successfully Jan 30 13:49:02.301517 systemd[1]: Reached target network.target - Network. Jan 30 13:49:02.368355 ignition[759]: Ignition 2.19.0 Jan 30 13:49:02.320981 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:49:02.368365 ignition[759]: Stage: fetch Jan 30 13:49:02.378614 unknown[759]: fetched base config from "system" Jan 30 13:49:02.368581 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.378626 unknown[759]: fetched base config from "system" Jan 30 13:49:02.368593 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.378635 unknown[759]: fetched user config from "gcp" Jan 30 13:49:02.368765 ignition[759]: parsed url from cmdline: "" Jan 30 13:49:02.381897 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:49:02.368773 ignition[759]: no config URL provided Jan 30 13:49:02.388972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:49:02.368783 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:49:02.435356 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:49:02.368798 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:49:02.475973 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:49:02.368822 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 30 13:49:02.516189 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:49:02.372657 ignition[759]: GET result: OK Jan 30 13:49:02.530178 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:49:02.372717 ignition[759]: parsing config with SHA512: fa5bac0662bf3d6457ebcf8e131e43e2b9177b1f54f5e0dedec6af1863448e1291b65216d9f484e526b28149d2f3d4ac28062e205befa8aaae9e681094291776 Jan 30 13:49:02.537091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:49:02.379897 ignition[759]: fetch: fetch complete Jan 30 13:49:02.565020 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:49:02.379913 ignition[759]: fetch: fetch passed Jan 30 13:49:02.571072 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:49:02.380004 ignition[759]: Ignition finished successfully Jan 30 13:49:02.588095 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:49:02.430027 ignition[764]: Ignition 2.19.0 Jan 30 13:49:02.609919 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:49:02.430035 ignition[764]: Stage: kargs Jan 30 13:49:02.430255 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.430267 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.432103 ignition[764]: kargs: kargs passed Jan 30 13:49:02.432160 ignition[764]: Ignition finished successfully Jan 30 13:49:02.513631 ignition[771]: Ignition 2.19.0 Jan 30 13:49:02.513640 ignition[771]: Stage: disks Jan 30 13:49:02.513895 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.513909 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.515000 ignition[771]: disks: disks passed Jan 30 13:49:02.515057 ignition[771]: Ignition finished successfully Jan 30 13:49:02.654278 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 13:49:02.856452 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:49:02.874873 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:49:02.997970 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. 
Quota mode: none. Jan 30 13:49:02.998814 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:49:02.999618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:49:03.031897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:49:03.043587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:49:03.058424 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:49:03.111937 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Jan 30 13:49:03.111997 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:03.112022 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:03.112045 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:03.058483 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:49:03.153032 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:49:03.153087 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:03.058515 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:49:03.136981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:49:03.162281 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:49:03.186004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:49:03.308567 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:49:03.318902 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:49:03.328862 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:49:03.338894 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:49:03.469142 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:49:03.475068 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:49:03.513780 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:03.519985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:49:03.529906 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:49:03.573901 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:49:03.582906 ignition[900]: INFO : Ignition 2.19.0 Jan 30 13:49:03.582906 ignition[900]: INFO : Stage: mount Jan 30 13:49:03.582906 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:03.582906 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:03.582906 ignition[900]: INFO : mount: mount passed Jan 30 13:49:03.582906 ignition[900]: INFO : Ignition finished successfully Jan 30 13:49:03.593288 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:49:03.605893 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:49:03.653975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 13:49:03.700777 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912) Jan 30 13:49:03.718065 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:03.718139 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:03.718177 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:03.739618 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:49:03.739709 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:03.742471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:49:03.780816 ignition[929]: INFO : Ignition 2.19.0 Jan 30 13:49:03.780816 ignition[929]: INFO : Stage: files Jan 30 13:49:03.795838 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:03.795838 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:03.795838 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:49:03.795838 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:49:03.795838 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 13:49:03.792805 unknown[929]: wrote ssh authorized keys file for user: core Jan 30 13:49:04.022988 systemd-networkd[751]: eth0: Gained IPv6LL Jan 30 13:49:04.942309 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:49:05.150624 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:49:05.466841 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:49:05.858186 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.858186 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:05.896914 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:05.896914 ignition[929]: INFO : files: files passed Jan 30 13:49:05.896914 ignition[929]: INFO : Ignition finished successfully Jan 30 13:49:05.863617 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:49:05.882969 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:49:05.917952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:49:05.949302 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:49:06.122998 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:06.122998 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:05.949468 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
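The Ignition "files" stage logged above records the core-user setup, the file and link writes (the Helm tarball, the yaml manifests, update.conf, the kubernetes sysext image) and the prepare-helm.service preset, but the provisioning config that drove it is not part of this log. As a hedged illustration only, the Python sketch below emits a minimal Ignition spec-3 style JSON document of the kind that could produce a subset of those operations; the URLs and paths are copied from the log, while the spec version string, SSH key, and unit contents are placeholders, not the config actually used on this host.

    import json

    # Hypothetical, abbreviated Ignition v3 config mirroring a few of the
    # operations in the "files" stage above. Placeholder values are marked.
    config = {
        "ignition": {"version": "3.4.0"},   # assumed spec version
        "passwd": {
            "users": [
                {"name": "core",
                 "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=placeholder unit body\n"},
            ]
        },
    }

    print(json.dumps(config, indent=2))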
Jan 30 13:49:06.177945 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:05.978216 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:05.986157 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:49:06.012955 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:49:06.095237 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:49:06.095364 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:49:06.115606 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:49:06.132895 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:49:06.154065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:49:06.160933 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:49:06.223411 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:06.244077 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:49:06.277814 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:49:06.292293 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:06.302286 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:49:06.320228 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:49:06.320425 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:06.358220 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:49:06.369301 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:49:06.404221 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:49:06.414300 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:49:06.431338 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:49:06.449298 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:49:06.468255 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:49:06.503220 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:49:06.513223 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:49:06.531243 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:49:06.549198 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:49:06.549418 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:49:06.583172 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:49:06.593174 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:49:06.610142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:49:06.610302 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:49:06.627180 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:49:06.627384 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 30 13:49:06.674168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:49:06.674378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:06.771930 ignition[981]: INFO : Ignition 2.19.0 Jan 30 13:49:06.771930 ignition[981]: INFO : Stage: umount Jan 30 13:49:06.771930 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:06.771930 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:06.771930 ignition[981]: INFO : umount: umount passed Jan 30 13:49:06.771930 ignition[981]: INFO : Ignition finished successfully Jan 30 13:49:06.703301 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:49:06.703522 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:49:06.723105 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:49:06.754172 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:49:06.754449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:49:06.789084 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:49:06.829031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:49:06.829279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:06.837341 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:49:06.837523 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:49:06.877898 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:49:06.878956 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:49:06.879069 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:49:06.894456 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:49:06.894566 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:49:06.916210 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:49:06.916347 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:49:06.927046 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:49:06.927100 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:49:06.954082 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:49:06.954155 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:49:06.964124 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:49:06.964187 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:49:06.982109 systemd[1]: Stopped target network.target - Network. Jan 30 13:49:06.999049 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:49:06.999120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:49:07.016115 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:49:07.033071 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:49:07.036815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:49:07.049042 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:49:07.067074 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:49:07.092053 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 30 13:49:07.092117 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:49:07.100094 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:49:07.100150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:49:07.117107 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:49:07.117176 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:49:07.135172 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:49:07.135251 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:49:07.169112 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:49:07.169192 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:49:07.195384 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:49:07.199837 systemd-networkd[751]: eth0: DHCPv6 lease lost Jan 30 13:49:07.213145 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:49:07.231893 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:49:07.232050 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:49:07.259296 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:49:07.259656 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:49:07.268711 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:49:07.268829 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:49:07.290009 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:49:07.310910 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:49:07.311054 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:49:07.323015 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:49:07.323096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:49:07.345973 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:49:07.346131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:49:07.363981 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:49:07.771882 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 13:49:07.364074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:49:07.383127 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:49:07.402387 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:49:07.402553 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:49:07.430536 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:49:07.430674 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:49:07.439172 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:49:07.439226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:49:07.466110 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:49:07.466198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:49:07.495231 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 30 13:49:07.495311 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:49:07.539029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:49:07.539146 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:07.570954 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:49:07.573039 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:49:07.573109 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:49:07.600300 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:49:07.600381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:07.620662 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:49:07.620812 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:49:07.638630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:49:07.638766 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:49:07.668264 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:49:07.694914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:49:07.721920 systemd[1]: Switching root. Jan 30 13:49:08.025879 systemd-journald[183]: Journal stopped Jan 30 13:48:59.088859 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:48:59.088907 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:59.088926 kernel: BIOS-provided physical RAM map: Jan 30 13:48:59.088941 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 30 13:48:59.088955 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 30 13:48:59.088969 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 30 13:48:59.088986 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 30 13:48:59.089005 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 30 13:48:59.089020 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 30 13:48:59.089035 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 30 13:48:59.089050 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 30 13:48:59.089065 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 30 13:48:59.089081 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 30 13:48:59.089095 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 30 13:48:59.089118 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 30 13:48:59.089144 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 30 13:48:59.089164 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 30 13:48:59.089180 kernel: NX (Execute Disable) protection: 
active Jan 30 13:48:59.089198 kernel: APIC: Static calls initialized Jan 30 13:48:59.089215 kernel: efi: EFI v2.7 by EDK II Jan 30 13:48:59.089230 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 30 13:48:59.089247 kernel: SMBIOS 2.4 present. Jan 30 13:48:59.089264 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 30 13:48:59.089281 kernel: Hypervisor detected: KVM Jan 30 13:48:59.089301 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:48:59.089317 kernel: kvm-clock: using sched offset of 12159584440 cycles Jan 30 13:48:59.089335 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:48:59.089353 kernel: tsc: Detected 2299.998 MHz processor Jan 30 13:48:59.089370 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:48:59.089388 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:48:59.089405 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 30 13:48:59.089422 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 30 13:48:59.089439 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:48:59.089459 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 30 13:48:59.089484 kernel: Using GB pages for direct mapping Jan 30 13:48:59.089501 kernel: Secure boot disabled Jan 30 13:48:59.089518 kernel: ACPI: Early table checksum verification disabled Jan 30 13:48:59.089535 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 30 13:48:59.089552 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 30 13:48:59.089569 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 30 13:48:59.089593 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 30 13:48:59.089614 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 30 13:48:59.089632 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 30 13:48:59.089651 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 30 13:48:59.089667 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 30 13:48:59.089685 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 30 13:48:59.089702 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 30 13:48:59.089723 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 30 13:48:59.089762 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 30 13:48:59.089778 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 30 13:48:59.089793 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 30 13:48:59.089809 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 30 13:48:59.089823 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 30 13:48:59.089839 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 30 13:48:59.089856 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 30 13:48:59.089873 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 30 13:48:59.089896 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] 
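The journal replay above repeats the kernel command line, including the Flatcar-specific parameters (flatcar.first_boot=detected, flatcar.oem.id=gce, root=LABEL=ROOT, verity.usr=PARTUUID=..., verity.usrhash=...). Purely as an illustration of how such space-separated key[=value] tokens can be turned into a lookup table, here is a small Python sketch; it is not the kernel's or dracut's actual parser, quoting is ignored, and the abbreviated sample string is copied from the log.

    # Minimal sketch: split a kernel command line into a dict.
    # Bare flags become True; a repeated key keeps its last value.
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
               "flatcar.first_boot=detected flatcar.oem.id=gce "
               "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681")

    def parse_cmdline(line: str) -> dict:
        params = {}
        for token in line.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    params = parse_cmdline(cmdline)
    print(params["flatcar.oem.id"])        # -> gce
    print(params["verity.usrhash"][:12])   # first characters of the usr root hash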
Jan 30 13:48:59.089913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:48:59.089930 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:48:59.089948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:48:59.089964 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 30 13:48:59.089980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 30 13:48:59.089997 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 30 13:48:59.090015 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 30 13:48:59.090031 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 30 13:48:59.090053 kernel: Zone ranges: Jan 30 13:48:59.090069 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:48:59.090087 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:48:59.090104 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:48:59.090121 kernel: Movable zone start for each node Jan 30 13:48:59.090138 kernel: Early memory node ranges Jan 30 13:48:59.090155 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 30 13:48:59.090173 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 30 13:48:59.090189 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 30 13:48:59.090206 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 30 13:48:59.090226 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 30 13:48:59.090243 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 30 13:48:59.090260 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:48:59.090277 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 30 13:48:59.090295 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 30 13:48:59.090311 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 30 13:48:59.090327 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 30 13:48:59.090344 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 30 13:48:59.090362 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:48:59.090384 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:48:59.090400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:48:59.090418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:48:59.090435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:48:59.090453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:48:59.090470 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:48:59.090495 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:48:59.090512 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 30 13:48:59.090528 kernel: Booting paravirtualized kernel on KVM Jan 30 13:48:59.090550 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:48:59.090567 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:48:59.090585 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:48:59.090602 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:48:59.090618 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:48:59.090635 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:48:59.090652 
kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:48:59.090671 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:59.090695 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:48:59.090711 kernel: random: crng init done Jan 30 13:48:59.090729 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:48:59.090763 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:48:59.090780 kernel: Fallback order for Node 0: 0 Jan 30 13:48:59.090796 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 30 13:48:59.090814 kernel: Policy zone: Normal Jan 30 13:48:59.090831 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:48:59.090848 kernel: software IO TLB: area num 2. Jan 30 13:48:59.090870 kernel: Memory: 7513376K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346948K reserved, 0K cma-reserved) Jan 30 13:48:59.090888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:48:59.090905 kernel: Kernel/User page tables isolation: enabled Jan 30 13:48:59.090923 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:48:59.090940 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:48:59.090956 kernel: Dynamic Preempt: voluntary Jan 30 13:48:59.090974 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:48:59.090993 kernel: rcu: RCU event tracing is enabled. Jan 30 13:48:59.091028 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:48:59.091046 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:48:59.091064 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:48:59.091087 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:48:59.091106 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:48:59.091124 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:48:59.091143 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:48:59.091162 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:48:59.091181 kernel: Console: colour dummy device 80x25 Jan 30 13:48:59.091203 kernel: printk: console [ttyS0] enabled Jan 30 13:48:59.091223 kernel: ACPI: Core revision 20230628 Jan 30 13:48:59.091241 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:48:59.091260 kernel: x2apic enabled Jan 30 13:48:59.091279 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:48:59.091298 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 30 13:48:59.091317 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 30 13:48:59.091337 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 30 13:48:59.091359 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 30 13:48:59.091378 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 30 13:48:59.091397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:48:59.091416 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:48:59.091435 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:48:59.091454 kernel: Spectre V2 : Mitigation: IBRS Jan 30 13:48:59.091482 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:48:59.091501 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:48:59.091520 kernel: RETBleed: Mitigation: IBRS Jan 30 13:48:59.091544 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:48:59.091562 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 30 13:48:59.091578 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:48:59.091595 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:48:59.091614 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:48:59.091633 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:48:59.091652 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:48:59.091671 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:48:59.091690 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:48:59.091713 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:48:59.091733 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:48:59.091775 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:48:59.091795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:48:59.091814 kernel: landlock: Up and running. Jan 30 13:48:59.091833 kernel: SELinux: Initializing. Jan 30 13:48:59.091851 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.091870 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.091888 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 30 13:48:59.091911 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:48:59.091930 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:48:59.091948 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:48:59.091967 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 30 13:48:59.091985 kernel: signal: max sigframe size: 1776 Jan 30 13:48:59.092004 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:48:59.092033 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:48:59.092051 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:48:59.092069 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:48:59.092091 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:48:59.092109 kernel: .... node #0, CPUs: #1 Jan 30 13:48:59.092129 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 30 13:48:59.092148 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 30 13:48:59.092167 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:48:59.092195 kernel: smpboot: Max logical packages: 1 Jan 30 13:48:59.092218 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 30 13:48:59.092245 kernel: devtmpfs: initialized Jan 30 13:48:59.092269 kernel: x86/mm: Memory block size: 128MB Jan 30 13:48:59.092288 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 30 13:48:59.092307 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:48:59.092326 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:48:59.092345 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:48:59.092364 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:48:59.092383 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:48:59.092402 kernel: audit: type=2000 audit(1738244937.484:1): state=initialized audit_enabled=0 res=1 Jan 30 13:48:59.092421 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:48:59.092444 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:48:59.092464 kernel: cpuidle: using governor menu Jan 30 13:48:59.092491 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:48:59.092516 kernel: dca service started, version 1.12.1 Jan 30 13:48:59.092536 kernel: PCI: Using configuration type 1 for base access Jan 30 13:48:59.092555 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:48:59.092581 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:48:59.092598 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:48:59.092617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:48:59.092641 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:48:59.092660 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:48:59.092686 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:48:59.092703 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:48:59.092721 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:48:59.092739 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 30 13:48:59.092773 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:48:59.092792 kernel: ACPI: Interpreter enabled Jan 30 13:48:59.092810 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:48:59.092834 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:48:59.092852 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:48:59.092868 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:48:59.092886 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 30 13:48:59.092905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:48:59.093154 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:48:59.093352 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:48:59.093543 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:48:59.093573 kernel: PCI host bridge to bus 0000:00 Jan 30 13:48:59.093765 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:48:59.093945 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:48:59.094143 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:48:59.094330 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 30 13:48:59.094498 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:48:59.094696 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:48:59.094918 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 30 13:48:59.095103 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:48:59.095299 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 30 13:48:59.095527 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 30 13:48:59.095716 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 30 13:48:59.095948 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 30 13:48:59.096145 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:48:59.096335 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 30 13:48:59.096530 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 30 13:48:59.096722 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:48:59.096942 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 30 13:48:59.097135 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 30 13:48:59.097166 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:48:59.097186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:48:59.097204 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:48:59.097221 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:48:59.097238 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:48:59.097256 kernel: iommu: Default domain type: Translated Jan 30 13:48:59.097273 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:48:59.097289 kernel: efivars: Registered efivars operations Jan 30 13:48:59.097308 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:48:59.097333 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:48:59.097351 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 30 13:48:59.097369 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 30 13:48:59.097388 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 30 13:48:59.097407 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 30 13:48:59.097426 kernel: vgaarb: loaded Jan 30 13:48:59.097446 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:48:59.097465 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:48:59.097493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:48:59.097517 kernel: pnp: PnP ACPI init Jan 30 13:48:59.097537 kernel: pnp: PnP ACPI: found 7 devices Jan 30 13:48:59.097557 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:48:59.097576 kernel: NET: Registered PF_INET protocol family Jan 30 13:48:59.097597 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:48:59.097617 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:48:59.097637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:48:59.097657 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:48:59.097676 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:48:59.097700 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:48:59.097720 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.097739 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:48:59.097847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:48:59.097865 kernel: NET: Registered PF_XDP protocol family Jan 30 13:48:59.098054 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:48:59.098225 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:48:59.098408 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:48:59.098590 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 30 13:48:59.098806 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:48:59.098834 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:48:59.098855 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:48:59.098874 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 30 13:48:59.098894 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:48:59.098923 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 30 13:48:59.098943 kernel: clocksource: Switched to clocksource tsc Jan 30 13:48:59.098969 kernel: Initialise system trusted keyrings Jan 30 13:48:59.098988 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:48:59.099008 kernel: Key type asymmetric registered Jan 30 13:48:59.099027 kernel: Asymmetric key parser 'x509' registered Jan 30 13:48:59.099047 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:48:59.099067 kernel: io scheduler mq-deadline registered Jan 30 13:48:59.099086 kernel: io scheduler kyber registered Jan 30 13:48:59.099106 kernel: io scheduler bfq registered Jan 30 13:48:59.099125 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:48:59.099150 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:48:59.099341 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 30 13:48:59.099366 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 30 13:48:59.099569 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 30 13:48:59.099600 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:48:59.099806 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 30 13:48:59.099830 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:48:59.099850 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:48:59.099869 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:48:59.099896 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 30 13:48:59.099916 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 30 13:48:59.100123 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 30 13:48:59.100149 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:48:59.100169 kernel: i8042: Warning: Keylock active Jan 30 13:48:59.100200 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:48:59.100222 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:48:59.100411 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 30 13:48:59.100594 kernel: rtc_cmos 00:00: registered as rtc0 Jan 30 13:48:59.100788 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:48:58 UTC (1738244938) Jan 30 13:48:59.100965 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 30 13:48:59.100989 kernel: intel_pstate: CPU model not supported Jan 30 13:48:59.101008 kernel: pstore: Using crash dump compression: deflate Jan 30 13:48:59.101027 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:48:59.101046 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:48:59.101065 kernel: Segment Routing with IPv6 Jan 30 13:48:59.101090 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:48:59.101109 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:48:59.101128 kernel: Key type dns_resolver registered Jan 30 13:48:59.101147 kernel: IPI shorthand broadcast: enabled Jan 30 13:48:59.101166 kernel: sched_clock: Marking stable (826004012, 169689029)->(1017507376, -21814335) Jan 30 13:48:59.101185 kernel: registered taskstats version 1 Jan 30 13:48:59.101204 kernel: Loading compiled-in X.509 certificates Jan 30 13:48:59.101223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:48:59.101242 kernel: Key type .fscrypt registered Jan 30 13:48:59.101265 kernel: Key type fscrypt-provisioning registered Jan 30 13:48:59.101284 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:48:59.101303 kernel: ima: No architecture policies found Jan 30 
13:48:59.101322 kernel: clk: Disabling unused clocks Jan 30 13:48:59.101341 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:48:59.101359 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:48:59.101379 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:48:59.101398 kernel: Run /init as init process Jan 30 13:48:59.101417 kernel: with arguments: Jan 30 13:48:59.101439 kernel: /init Jan 30 13:48:59.101457 kernel: with environment: Jan 30 13:48:59.101490 kernel: HOME=/ Jan 30 13:48:59.101517 kernel: TERM=linux Jan 30 13:48:59.101536 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:48:59.101564 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:48:59.101589 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:48:59.101624 systemd[1]: Detected virtualization google. Jan 30 13:48:59.101644 systemd[1]: Detected architecture x86-64. Jan 30 13:48:59.101663 systemd[1]: Running in initrd. Jan 30 13:48:59.101688 systemd[1]: No hostname configured, using default hostname. Jan 30 13:48:59.101708 systemd[1]: Hostname set to . Jan 30 13:48:59.101729 systemd[1]: Initializing machine ID from random generator. Jan 30 13:48:59.101774 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:48:59.101794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:48:59.101818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:48:59.101840 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:48:59.101867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:48:59.101887 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:48:59.101912 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:48:59.101935 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:48:59.101956 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:48:59.101980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:48:59.102000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:48:59.102047 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:48:59.102072 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:48:59.102092 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:48:59.102113 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:48:59.102145 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:48:59.102170 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:48:59.102197 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:48:59.102223 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
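The device units the initrd declares above use systemd's escaped path names (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, dev-mapper-usr.device, and so on), in which '/' becomes '-' and bytes outside a small allowed set become \xNN. The Python sketch below is a simplified re-implementation of that path escaping, for illustration only; the authoritative behaviour is systemd's own systemd-escape / unit_name_from_path(), and edge cases such as the root path, leading dots, and non-UTF-8 bytes are ignored here.

    # Simplified sketch of systemd path-to-unit-name escaping (cf. systemd-escape -p).
    ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
                  "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                  "0123456789:_.")

    def escape_path(path: str, suffix: str = ".device") -> str:
        trimmed = path.strip("/")
        out = []
        for ch in trimmed:
            if ch == "/":
                out.append("-")                   # path separators map to '-'
            elif ch in ALLOWED:
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))   # everything else is hex-escaped
        return "".join(out) + suffix

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit names above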
Jan 30 13:48:59.102244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:48:59.102265 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:48:59.102286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:48:59.102312 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:48:59.102333 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:48:59.102359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:48:59.102380 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:48:59.102401 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:48:59.102422 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:48:59.102443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:48:59.102479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:48:59.102501 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:48:59.102553 systemd-journald[183]: Collecting audit messages is disabled. Jan 30 13:48:59.102601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:48:59.102622 systemd-journald[183]: Journal started Jan 30 13:48:59.102673 systemd-journald[183]: Runtime Journal (/run/log/journal/161dcfd666884636b29b95d9c64018f8) is 8.0M, max 148.7M, 140.7M free. Jan 30 13:48:59.108130 systemd-modules-load[184]: Inserted module 'overlay' Jan 30 13:48:59.114868 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:48:59.119360 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:48:59.146781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:48:59.150821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:48:59.153771 kernel: Bridge firewalling registered Jan 30 13:48:59.154104 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 30 13:48:59.157953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:48:59.160960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:48:59.169246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:48:59.177189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:48:59.192979 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:48:59.194719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:48:59.203946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:48:59.204707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:48:59.221557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:48:59.225300 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:48:59.235578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:48:59.246485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 13:48:59.253975 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:48:59.285007 systemd-resolved[212]: Positive Trust Anchors: Jan 30 13:48:59.285547 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:48:59.292856 dracut-cmdline[216]: dracut-dracut-053 Jan 30 13:48:59.292856 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:48:59.285769 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:48:59.292214 systemd-resolved[212]: Defaulting to hostname 'linux'. Jan 30 13:48:59.294806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:48:59.309067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:48:59.389784 kernel: SCSI subsystem initialized Jan 30 13:48:59.400796 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:48:59.412796 kernel: iscsi: registered transport (tcp) Jan 30 13:48:59.436320 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:48:59.436416 kernel: QLogic iSCSI HBA Driver Jan 30 13:48:59.492637 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:48:59.499124 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:48:59.540958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:48:59.541051 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:48:59.541077 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:48:59.584803 kernel: raid6: avx2x4 gen() 18213 MB/s Jan 30 13:48:59.601789 kernel: raid6: avx2x2 gen() 18272 MB/s Jan 30 13:48:59.619150 kernel: raid6: avx2x1 gen() 14289 MB/s Jan 30 13:48:59.619185 kernel: raid6: using algorithm avx2x2 gen() 18272 MB/s Jan 30 13:48:59.637389 kernel: raid6: .... xor() 17697 MB/s, rmw enabled Jan 30 13:48:59.637436 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:48:59.660781 kernel: xor: automatically using best checksumming function avx Jan 30 13:48:59.839786 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:48:59.853131 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:48:59.864063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:48:59.897983 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 30 13:48:59.905270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:48:59.916114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:48:59.949278 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 30 13:48:59.986877 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:48:59.993953 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:49:00.084971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:00.094436 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:49:00.140711 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:49:00.148621 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:49:00.152888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:00.157452 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:49:00.175990 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:49:00.210273 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:49:00.213896 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:49:00.256275 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:49:00.256841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:00.283528 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:49:00.283571 kernel: AES CTR mode by8 optimization enabled Jan 30 13:49:00.294001 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:49:00.294265 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:49:00.311902 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 30 13:49:00.298868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:49:00.299131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:00.307513 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:00.333810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:00.358500 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 30 13:49:00.373292 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 30 13:49:00.373589 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 30 13:49:00.373843 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 30 13:49:00.374080 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:49:00.374311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:49:00.374340 kernel: GPT:17805311 != 25165823 Jan 30 13:49:00.374374 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:49:00.374399 kernel: GPT:17805311 != 25165823 Jan 30 13:49:00.374424 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:49:00.374450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:00.374483 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 30 13:49:00.366792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:00.381022 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
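The GPT messages above (GPT:Primary header thinks Alt. header is not at the end of the disk. GPT:17805311 != 25165823) show that the backup GPT header still sits where the smaller original image ended rather than at the last LBA of the 12 GiB PersistentDisk; the disk-uuid step logged further down then rewrites the headers. As a hedged illustration of the check the kernel performs, the Python sketch below reads the primary GPT header at LBA 1 and compares its alternate-LBA field with the real last sector. The field offsets follow the UEFI GPT header layout, and a 512-byte logical sector is assumed, matching this log.

    import struct

    SECTOR = 512  # assumed logical sector size

    def gpt_alternate_lba(disk_path: str):
        """Return (alternate_lba, last_lba) read from the primary GPT header.

        alternate_lba != last_lba corresponds to the kernel's
        'Alternate GPT header not at the end of the disk' warning.
        Sketch only: CRCs and the protective MBR are not validated.
        """
        with open(disk_path, "rb") as disk:
            disk.seek(0, 2)                      # learn the disk size
            last_lba = disk.tell() // SECTOR - 1
            disk.seek(1 * SECTOR)                # primary GPT header is at LBA 1
            header = disk.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT signature at LBA 1")
        (alternate_lba,) = struct.unpack_from("<Q", header, 32)
        return alternate_lba, last_lba

    # Usage (needs read access to the raw device):
    #   alt, last = gpt_alternate_lba("/dev/sda")
    #   print(alt, last)   # e.g. 17805311 vs 25165823 before the headers are moved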
Jan 30 13:49:00.433776 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (466) Jan 30 13:49:00.438319 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:00.447904 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (447) Jan 30 13:49:00.455646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 30 13:49:00.469860 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 30 13:49:00.490317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 30 13:49:00.496516 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 30 13:49:00.496760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 30 13:49:00.509987 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:49:00.535374 disk-uuid[549]: Primary Header is updated. Jan 30 13:49:00.535374 disk-uuid[549]: Secondary Entries is updated. Jan 30 13:49:00.535374 disk-uuid[549]: Secondary Header is updated. Jan 30 13:49:00.548771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:00.572787 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:00.579766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:01.579814 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:01.580730 disk-uuid[550]: The operation has completed successfully. Jan 30 13:49:01.647572 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:49:01.647721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:49:01.680946 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:49:01.708476 sh[567]: Success Jan 30 13:49:01.711910 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:49:01.799083 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:49:01.806003 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:49:01.830332 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:49:01.861786 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:49:01.861859 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:01.878655 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:49:01.878713 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:49:01.885471 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:49:01.918777 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:49:01.924457 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:49:01.925424 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:49:01.930945 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 30 13:49:01.991760 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:01.991833 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:01.991860 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:01.989625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:49:02.038380 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:49:02.038426 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:02.038453 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:02.056564 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:49:02.062981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:49:02.159627 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:49:02.165640 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:49:02.266617 systemd-networkd[751]: lo: Link UP Jan 30 13:49:02.266637 systemd-networkd[751]: lo: Gained carrier Jan 30 13:49:02.268928 ignition[662]: Ignition 2.19.0 Jan 30 13:49:02.269311 systemd-networkd[751]: Enumeration completed Jan 30 13:49:02.268941 ignition[662]: Stage: fetch-offline Jan 30 13:49:02.270095 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:49:02.269001 ignition[662]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.270800 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:02.269017 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.270807 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:49:02.269169 ignition[662]: parsed url from cmdline: "" Jan 30 13:49:02.272991 systemd-networkd[751]: eth0: Link UP Jan 30 13:49:02.269176 ignition[662]: no config URL provided Jan 30 13:49:02.272997 systemd-networkd[751]: eth0: Gained carrier Jan 30 13:49:02.269186 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:49:02.273006 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:02.269210 ignition[662]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:49:02.287834 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.25/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 13:49:02.269223 ignition[662]: failed to fetch config: resource requires networking Jan 30 13:49:02.289556 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:49:02.269517 ignition[662]: Ignition finished successfully Jan 30 13:49:02.301517 systemd[1]: Reached target network.target - Network. Jan 30 13:49:02.368355 ignition[759]: Ignition 2.19.0 Jan 30 13:49:02.320981 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:49:02.368365 ignition[759]: Stage: fetch Jan 30 13:49:02.378614 unknown[759]: fetched base config from "system" Jan 30 13:49:02.368581 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.378626 unknown[759]: fetched base config from "system" Jan 30 13:49:02.368593 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.378635 unknown[759]: fetched user config from "gcp" Jan 30 13:49:02.368765 ignition[759]: parsed url from cmdline: "" Jan 30 13:49:02.381897 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:49:02.368773 ignition[759]: no config URL provided Jan 30 13:49:02.388972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:49:02.368783 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:49:02.435356 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:49:02.368798 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:49:02.475973 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:49:02.368822 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 30 13:49:02.516189 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:49:02.372657 ignition[759]: GET result: OK Jan 30 13:49:02.530178 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:49:02.372717 ignition[759]: parsing config with SHA512: fa5bac0662bf3d6457ebcf8e131e43e2b9177b1f54f5e0dedec6af1863448e1291b65216d9f484e526b28149d2f3d4ac28062e205befa8aaae9e681094291776 Jan 30 13:49:02.537091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:49:02.379897 ignition[759]: fetch: fetch complete Jan 30 13:49:02.565020 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:49:02.379913 ignition[759]: fetch: fetch passed Jan 30 13:49:02.571072 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:49:02.380004 ignition[759]: Ignition finished successfully Jan 30 13:49:02.588095 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:49:02.430027 ignition[764]: Ignition 2.19.0 Jan 30 13:49:02.609919 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:49:02.430035 ignition[764]: Stage: kargs Jan 30 13:49:02.430255 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.430267 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.432103 ignition[764]: kargs: kargs passed Jan 30 13:49:02.432160 ignition[764]: Ignition finished successfully Jan 30 13:49:02.513631 ignition[771]: Ignition 2.19.0 Jan 30 13:49:02.513640 ignition[771]: Stage: disks Jan 30 13:49:02.513895 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:02.513909 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:02.515000 ignition[771]: disks: disks passed Jan 30 13:49:02.515057 ignition[771]: Ignition finished successfully Jan 30 13:49:02.654278 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 13:49:02.856452 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:49:02.874873 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:49:02.997970 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. 
Quota mode: none. Jan 30 13:49:02.998814 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:49:02.999618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:49:03.031897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:49:03.043587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:49:03.058424 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:49:03.111937 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Jan 30 13:49:03.111997 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:03.112022 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:03.112045 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:03.058483 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:49:03.153032 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:49:03.153087 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:03.058515 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:49:03.136981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:49:03.162281 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:49:03.186004 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:49:03.308567 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:49:03.318902 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:49:03.328862 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:49:03.338894 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:49:03.469142 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:49:03.475068 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:49:03.513780 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:03.519985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:49:03.529906 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:49:03.573901 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:49:03.582906 ignition[900]: INFO : Ignition 2.19.0 Jan 30 13:49:03.582906 ignition[900]: INFO : Stage: mount Jan 30 13:49:03.582906 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:03.582906 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:03.582906 ignition[900]: INFO : mount: mount passed Jan 30 13:49:03.582906 ignition[900]: INFO : Ignition finished successfully Jan 30 13:49:03.593288 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:49:03.605893 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:49:03.653975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 13:49:03.700777 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912) Jan 30 13:49:03.718065 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:03.718139 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:03.718177 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:03.739618 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:49:03.739709 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:03.742471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:49:03.780816 ignition[929]: INFO : Ignition 2.19.0 Jan 30 13:49:03.780816 ignition[929]: INFO : Stage: files Jan 30 13:49:03.795838 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:03.795838 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:03.795838 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:49:03.795838 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:49:03.795838 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:49:03.795838 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 13:49:03.792805 unknown[929]: wrote ssh authorized keys file for user: core Jan 30 13:49:04.022988 systemd-networkd[751]: eth0: Gained IPv6LL Jan 30 13:49:04.942309 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:49:05.150624 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.167897 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:49:05.466841 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:49:05.858186 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:49:05.858186 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:05.896914 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:05.896914 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:05.896914 ignition[929]: INFO : files: files passed Jan 30 13:49:05.896914 ignition[929]: INFO : Ignition finished successfully Jan 30 13:49:05.863617 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:49:05.882969 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:49:05.917952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:49:05.949302 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:49:06.122998 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:06.122998 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:05.949468 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 13:49:06.177945 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:05.978216 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:05.986157 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:49:06.012955 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:49:06.095237 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:49:06.095364 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:49:06.115606 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:49:06.132895 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:49:06.154065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:49:06.160933 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:49:06.223411 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:06.244077 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:49:06.277814 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:49:06.292293 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:06.302286 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:49:06.320228 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:49:06.320425 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:06.358220 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:49:06.369301 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:49:06.404221 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:49:06.414300 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:49:06.431338 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:49:06.449298 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:49:06.468255 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:49:06.503220 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:49:06.513223 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:49:06.531243 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:49:06.549198 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:49:06.549418 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:49:06.583172 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:49:06.593174 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:49:06.610142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:49:06.610302 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:49:06.627180 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:49:06.627384 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 30 13:49:06.674168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:49:06.674378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:06.771930 ignition[981]: INFO : Ignition 2.19.0 Jan 30 13:49:06.771930 ignition[981]: INFO : Stage: umount Jan 30 13:49:06.771930 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:06.771930 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 13:49:06.771930 ignition[981]: INFO : umount: umount passed Jan 30 13:49:06.771930 ignition[981]: INFO : Ignition finished successfully Jan 30 13:49:06.703301 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:49:06.703522 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:49:06.723105 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:49:06.754172 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:49:06.754449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:49:06.789084 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:49:06.829031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:49:06.829279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:06.837341 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:49:06.837523 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:49:06.877898 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:49:06.878956 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:49:06.879069 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:49:06.894456 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:49:06.894566 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:49:06.916210 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:49:06.916347 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:49:06.927046 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:49:06.927100 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:49:06.954082 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:49:06.954155 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:49:06.964124 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:49:06.964187 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:49:06.982109 systemd[1]: Stopped target network.target - Network. Jan 30 13:49:06.999049 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:49:06.999120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:49:07.016115 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:49:07.033071 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:49:07.036815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:49:07.049042 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:49:07.067074 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:49:07.092053 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 30 13:49:07.092117 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:49:07.100094 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:49:07.100150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:49:07.117107 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:49:07.117176 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:49:07.135172 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:49:07.135251 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:49:07.169112 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:49:07.169192 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:49:07.195384 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:49:07.199837 systemd-networkd[751]: eth0: DHCPv6 lease lost Jan 30 13:49:07.213145 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:49:07.231893 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:49:07.232050 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:49:07.259296 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:49:07.259656 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:49:07.268711 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:49:07.268829 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:49:07.290009 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:49:07.310910 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:49:07.311054 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:49:07.323015 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:49:07.323096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:49:07.345973 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:49:07.346131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:49:07.363981 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:49:07.771882 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 13:49:07.364074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:49:07.383127 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:49:07.402387 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:49:07.402553 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:49:07.430536 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:49:07.430674 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:49:07.439172 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:49:07.439226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:49:07.466110 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:49:07.466198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:49:07.495231 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 30 13:49:07.495311 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:49:07.539029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:49:07.539146 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:07.570954 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:49:07.573039 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:49:07.573109 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:49:07.600300 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:49:07.600381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:07.620662 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:49:07.620812 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:49:07.638630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:49:07.638766 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:49:07.668264 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:49:07.694914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:49:07.721920 systemd[1]: Switching root. Jan 30 13:49:08.025879 systemd-journald[183]: Journal stopped Jan 30 13:49:10.400578 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:49:10.400620 kernel: SELinux: policy capability open_perms=1 Jan 30 13:49:10.400638 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:49:10.400650 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:49:10.400661 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:49:10.400672 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:49:10.400684 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:49:10.400699 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:49:10.400710 kernel: audit: type=1403 audit(1738244948.369:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:49:10.400725 systemd[1]: Successfully loaded SELinux policy in 89.326ms. Jan 30 13:49:10.400739 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.705ms. Jan 30 13:49:10.400852 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:49:10.400875 systemd[1]: Detected virtualization google. Jan 30 13:49:10.400894 systemd[1]: Detected architecture x86-64. Jan 30 13:49:10.400921 systemd[1]: Detected first boot. Jan 30 13:49:10.400942 systemd[1]: Initializing machine ID from random generator. Jan 30 13:49:10.400963 zram_generator::config[1022]: No configuration found. Jan 30 13:49:10.400985 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:49:10.401006 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:49:10.401030 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:49:10.401050 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 30 13:49:10.401071 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:49:10.401090 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:49:10.401110 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:49:10.401131 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:49:10.401153 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:49:10.401179 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:49:10.401223 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:49:10.401262 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:49:10.401283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:49:10.401312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:49:10.401339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:49:10.401360 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:49:10.401381 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:49:10.401409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:49:10.401430 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:49:10.401453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:49:10.401474 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:49:10.401497 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:49:10.401520 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:49:10.401549 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:49:10.401572 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:10.401597 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:49:10.401721 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:49:10.401785 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:49:10.401812 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:49:10.401836 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:49:10.401859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:49:10.401882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:49:10.401905 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:49:10.401939 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:49:10.401964 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:49:10.401986 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:49:10.402007 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:49:10.402028 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 13:49:10.402052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:49:10.402073 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:49:10.402095 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:49:10.402120 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:49:10.402140 systemd[1]: Reached target machines.target - Containers. Jan 30 13:49:10.402161 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:49:10.402186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:49:10.402209 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:49:10.402238 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:49:10.402258 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:49:10.402279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:49:10.402302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:49:10.402325 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:49:10.402346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:49:10.402365 kernel: ACPI: bus type drm_connector registered Jan 30 13:49:10.402385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:49:10.402421 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:49:10.402449 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:49:10.402472 kernel: fuse: init (API version 7.39) Jan 30 13:49:10.402495 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:49:10.402518 kernel: loop: module loaded Jan 30 13:49:10.402541 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:49:10.402565 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:49:10.402588 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:49:10.402655 systemd-journald[1109]: Collecting audit messages is disabled. Jan 30 13:49:10.402707 systemd-journald[1109]: Journal started Jan 30 13:49:10.402770 systemd-journald[1109]: Runtime Journal (/run/log/journal/b6b504533c9f4a968025c4824b89c4dc) is 8.0M, max 148.7M, 140.7M free. Jan 30 13:49:09.214515 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:49:09.237638 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:49:09.238207 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:49:10.426790 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:49:10.452780 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:49:10.488531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:49:10.488609 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:49:10.488647 systemd[1]: Stopped verity-setup.service. 
Jan 30 13:49:10.520778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:10.531789 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:49:10.543293 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:49:10.554160 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:49:10.565126 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:49:10.575093 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:49:10.585066 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:49:10.595072 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:49:10.605253 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:49:10.617187 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:49:10.628159 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:49:10.628385 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:49:10.640239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:49:10.640491 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:49:10.652207 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:49:10.652456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:49:10.662238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:49:10.662483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:49:10.674218 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:49:10.674451 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:49:10.684194 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:49:10.684440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:49:10.694283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:49:10.704174 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:49:10.715196 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:49:10.727236 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:10.753146 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:49:10.774919 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:49:10.786164 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:49:10.795896 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:49:10.796113 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:49:10.808343 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:49:10.823972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:49:10.841682 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 30 13:49:10.852020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:49:10.859290 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:49:10.874389 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:49:10.885940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:49:10.891892 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:49:10.902477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:49:10.912688 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:49:10.917331 systemd-journald[1109]: Time spent on flushing to /var/log/journal/b6b504533c9f4a968025c4824b89c4dc is 90.668ms for 927 entries. Jan 30 13:49:10.917331 systemd-journald[1109]: System Journal (/var/log/journal/b6b504533c9f4a968025c4824b89c4dc) is 8.0M, max 584.8M, 576.8M free. Jan 30 13:49:11.054069 systemd-journald[1109]: Received client request to flush runtime journal. Jan 30 13:49:11.054133 kernel: loop0: detected capacity change from 0 to 54824 Jan 30 13:49:11.054164 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:49:10.941016 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:49:10.956965 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:49:10.975341 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:49:10.996823 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:49:11.008043 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:49:11.019346 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:49:11.031365 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:49:11.061879 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:49:11.087783 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:49:11.106864 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:49:11.119956 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 13:49:11.124868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:49:11.135601 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:49:11.154714 udevadm[1142]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:49:11.176276 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:49:11.191293 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:49:11.200811 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:49:11.221625 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:49:11.258440 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jan 30 13:49:11.258478 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. 
Jan 30 13:49:11.298124 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:49:11.331907 kernel: loop3: detected capacity change from 0 to 218376 Jan 30 13:49:11.424073 kernel: loop4: detected capacity change from 0 to 54824 Jan 30 13:49:11.462773 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 13:49:11.502790 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 13:49:11.545772 kernel: loop7: detected capacity change from 0 to 218376 Jan 30 13:49:11.571666 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 30 13:49:11.572622 (sd-merge)[1165]: Merged extensions into '/usr'. Jan 30 13:49:11.584394 systemd[1]: Reloading requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:49:11.584421 systemd[1]: Reloading... Jan 30 13:49:11.704786 zram_generator::config[1187]: No configuration found. Jan 30 13:49:12.037553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:12.049563 ldconfig[1135]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:49:12.151260 systemd[1]: Reloading finished in 559 ms. Jan 30 13:49:12.176574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:49:12.187431 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:49:12.210043 systemd[1]: Starting ensure-sysext.service... Jan 30 13:49:12.226918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:49:12.253920 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:49:12.253946 systemd[1]: Reloading... Jan 30 13:49:12.303115 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:49:12.303864 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:49:12.306153 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:49:12.306853 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Jan 30 13:49:12.306976 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Jan 30 13:49:12.314876 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:49:12.315052 systemd-tmpfiles[1232]: Skipping /boot Jan 30 13:49:12.358215 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:49:12.360802 systemd-tmpfiles[1232]: Skipping /boot Jan 30 13:49:12.411779 zram_generator::config[1258]: No configuration found. Jan 30 13:49:12.557825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:12.622817 systemd[1]: Reloading finished in 368 ms. Jan 30 13:49:12.643640 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:49:12.659436 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:49:12.684978 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:49:12.702392 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:49:12.723984 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:49:12.743990 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:49:12.770282 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:49:12.778151 augenrules[1320]: No rules Jan 30 13:49:12.791008 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:49:12.801821 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:49:12.817056 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jan 30 13:49:12.830111 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:49:12.842210 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:49:12.871763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:12.872176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:49:12.879504 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:49:12.898197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:49:12.915078 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:49:12.924982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:49:12.928871 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:49:12.928963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:12.930618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:49:12.933363 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:49:12.963570 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:49:12.976824 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:49:12.988574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:49:12.988858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:49:13.000591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:49:13.001936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:49:13.014132 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:49:13.014690 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:49:13.025618 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:49:13.036812 systemd-resolved[1315]: Positive Trust Anchors: Jan 30 13:49:13.037238 systemd-resolved[1315]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:49:13.037313 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:49:13.046505 systemd-resolved[1315]: Defaulting to hostname 'linux'. Jan 30 13:49:13.050485 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:49:13.091379 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:49:13.092461 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:49:13.104270 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:13.104679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:49:13.115093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:49:13.131175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:49:13.149889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:49:13.169705 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:49:13.191639 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:49:13.203770 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:49:13.208012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:49:13.212762 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:49:13.219141 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:49:13.232769 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 30 13:49:13.235767 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 13:49:13.244139 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:49:13.254016 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:49:13.254188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:13.259005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:49:13.259265 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:49:13.271575 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:49:13.272637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:49:13.283611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:49:13.283889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 13:49:13.299777 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1352) Jan 30 13:49:13.299857 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 30 13:49:13.332869 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 30 13:49:13.315667 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:49:13.315967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:49:13.343740 systemd[1]: Finished ensure-sysext.service. Jan 30 13:49:13.366067 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 13:49:13.378271 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:49:13.430421 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 30 13:49:13.439794 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:49:13.445863 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:49:13.445957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:49:13.475979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:13.508516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 30 13:49:13.520451 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:49:13.530334 systemd-networkd[1375]: lo: Link UP Jan 30 13:49:13.530348 systemd-networkd[1375]: lo: Gained carrier Jan 30 13:49:13.532826 systemd-networkd[1375]: Enumeration completed Jan 30 13:49:13.533118 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:49:13.533591 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:13.533598 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:49:13.534346 systemd-networkd[1375]: eth0: Link UP Jan 30 13:49:13.534359 systemd-networkd[1375]: eth0: Gained carrier Jan 30 13:49:13.534384 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:13.540808 systemd-networkd[1375]: eth0: DHCPv4 address 10.128.0.25/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 13:49:13.543146 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 30 13:49:13.545489 systemd[1]: Reached target network.target - Network. Jan 30 13:49:13.552077 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:49:13.555963 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:49:13.558020 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:49:13.581943 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:49:13.604163 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:49:13.624444 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
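systemd-networkd matches eth0 against the catch-all /usr/lib/systemd/network/zz-default.network and takes a DHCPv4 lease of 10.128.0.25/32 with gateway 10.128.0.1 from the metadata service at 169.254.169.254; a host /32 address with an on-link default gateway is the usual GCE addressing model. A hedged sketch for verifying the lease and routing from a shell:

  # link state, lease and DNS as seen by networkd
  networkctl status eth0
  # the /32 address and the default route that points at 10.128.0.1
  ip -4 addr show dev eth0
  ip route show default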
Jan 30 13:49:13.625713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:49:13.633325 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:49:13.644895 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:49:13.660562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:13.672138 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:49:13.682025 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:49:13.692949 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:49:13.704068 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:49:13.714007 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:49:13.724896 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:49:13.735881 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:49:13.735946 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:49:13.744872 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:49:13.753668 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:49:13.765531 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:49:13.785566 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:49:13.795737 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:49:13.807060 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:49:13.817622 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:49:13.827861 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:49:13.835915 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:49:13.835971 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:49:13.847877 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:49:13.859488 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:49:13.876854 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:49:13.887871 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:49:13.927061 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:49:13.936880 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:49:13.937786 jq[1423]: false Jan 30 13:49:13.946163 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
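This stretch is systemd assembling sysinit.target and basic.target: path units (motdgen.path), timers (logrotate, the mdadm redundancy check, tmpfiles clean-up) and listening sockets (dbus, docker, sshd) are set up before the daemons that consume them, which is what lets containerd, coreos-metadata and dbus start in parallel just below. A small sketch, assuming normal systemd tooling, for listing the same units after boot:

  systemctl list-sockets --no-pager
  systemctl list-timers --no-pager
  systemctl list-dependencies basic.target --no-pager | head -n 20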
Jan 30 13:49:13.964780 extend-filesystems[1424]: Found loop4 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found loop5 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found loop6 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found loop7 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found sda Jan 30 13:49:13.964780 extend-filesystems[1424]: Found sda1 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found sda2 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found sda3 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found usr Jan 30 13:49:13.964780 extend-filesystems[1424]: Found sda4 Jan 30 13:49:13.964780 extend-filesystems[1424]: Found sda6 Jan 30 13:49:14.094998 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 30 13:49:14.095060 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 30 13:49:14.095088 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1352) Jan 30 13:49:14.015378 dbus-daemon[1422]: [system] SELinux support is enabled Jan 30 13:49:13.966159 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.986 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.993 INFO Fetch successful Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.994 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.994 INFO Fetch successful Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.994 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.995 INFO Fetch successful Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.995 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 30 13:49:14.101103 coreos-metadata[1421]: Jan 30 13:49:13.997 INFO Fetch successful Jan 30 13:49:14.101700 extend-filesystems[1424]: Found sda7 Jan 30 13:49:14.101700 extend-filesystems[1424]: Found sda9 Jan 30 13:49:14.101700 extend-filesystems[1424]: Checking size of /dev/sda9 Jan 30 13:49:14.101700 extend-filesystems[1424]: Resized partition /dev/sda9 Jan 30 13:49:14.021326 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1375 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:49:13.990717 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:49:14.172171 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:49:14.172171 extend-filesystems[1442]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 13:49:14.172171 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 30 13:49:14.172171 extend-filesystems[1442]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 30 13:49:14.169966 ntpd[1429]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:49:14.048012 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
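coreos-metadata resolves the hostname, external IP, internal IP and machine type from the GCE metadata server. The same endpoints can be queried by hand; a sketch using the paths from the log (the Metadata-Flavor header is mandatory for the GCE metadata service):

  MD=http://169.254.169.254/computeMetadata/v1
  curl -s -H 'Metadata-Flavor: Google' "$MD/instance/hostname"; echo
  curl -s -H 'Metadata-Flavor: Google' "$MD/instance/network-interfaces/0/ip"; echo
  curl -s -H 'Metadata-Flavor: Google' "$MD/instance/machine-type"; echo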
Jan 30 13:49:14.243878 extend-filesystems[1424]: Resized filesystem in /dev/sda9 Jan 30 13:49:14.170024 ntpd[1429]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:49:14.089027 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: ---------------------------------------------------- Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: corporation. Support and training for ntp-4 are Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: available at https://www.nwtime.org/support Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: ---------------------------------------------------- Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: proto: precision = 0.112 usec (-23) Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: basedate set to 2025-01-17 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: gps base set to 2025-01-19 (week 2350) Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Listen normally on 3 eth0 10.128.0.25:123 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Listen normally on 4 lo [::1]:123 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: bind(21) AF_INET6 fe80::4001:aff:fe80:19%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:19%2#123 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:19%2 Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: Listening on routing socket on fd #21 for interface updates Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:49:14.285326 ntpd[1429]: 30 Jan 13:49:14 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:49:14.170039 ntpd[1429]: ---------------------------------------------------- Jan 30 13:49:14.110977 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:49:14.170053 ntpd[1429]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:49:14.141467 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 30 13:49:14.170067 ntpd[1429]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:49:14.142217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
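extend-filesystems grows the root filesystem on /dev/sda9 online from 1617920 to 2538491 4 KiB blocks, i.e. from roughly 6.2 GiB to 9.7 GiB, by running resize2fs against the mounted filesystem after the partition has been enlarged. Flatcar does this automatically on first boot; a rough manual equivalent, shown only as an illustration, would be:

  # confirm partition size vs. filesystem size
  lsblk -b /dev/sda9
  df -B4k /
  # grow the mounted ext4 filesystem to fill the partition (online resize)
  resize2fs /dev/sda9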
Jan 30 13:49:14.290207 update_engine[1449]: I20250130 13:49:14.227717 1449 main.cc:92] Flatcar Update Engine starting Jan 30 13:49:14.290207 update_engine[1449]: I20250130 13:49:14.232158 1449 update_check_scheduler.cc:74] Next update check in 5m38s Jan 30 13:49:14.170080 ntpd[1429]: corporation. Support and training for ntp-4 are Jan 30 13:49:14.147916 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:49:14.170093 ntpd[1429]: available at https://www.nwtime.org/support Jan 30 13:49:14.170914 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:49:14.170107 ntpd[1429]: ---------------------------------------------------- Jan 30 13:49:14.184320 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:49:14.178851 ntpd[1429]: proto: precision = 0.112 usec (-23) Jan 30 13:49:14.209288 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:49:14.179270 ntpd[1429]: basedate set to 2025-01-17 Jan 30 13:49:14.210633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:49:14.179291 ntpd[1429]: gps base set to 2025-01-19 (week 2350) Jan 30 13:49:14.212115 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:49:14.185305 ntpd[1429]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:49:14.212348 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:49:14.185364 ntpd[1429]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:49:14.249202 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:49:14.186965 ntpd[1429]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:49:14.249232 systemd-logind[1448]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:49:14.187021 ntpd[1429]: Listen normally on 3 eth0 10.128.0.25:123 Jan 30 13:49:14.249261 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:49:14.187078 ntpd[1429]: Listen normally on 4 lo [::1]:123 Jan 30 13:49:14.249681 systemd-logind[1448]: New seat seat0. Jan 30 13:49:14.187152 ntpd[1429]: bind(21) AF_INET6 fe80::4001:aff:fe80:19%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:49:14.254296 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:49:14.187181 ntpd[1429]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:19%2#123 Jan 30 13:49:14.264272 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:49:14.187201 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:19%2 Jan 30 13:49:14.265617 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:49:14.187245 ntpd[1429]: Listening on routing socket on fd #21 for interface updates Jan 30 13:49:14.284222 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:49:14.191842 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:49:14.284457 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
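ntpd binds the IPv4/IPv6 wildcards, the loopback addresses and eth0's IPv4 address, but cannot yet bind the eth0 IPv6 link-local address because the interface has not gained IPv6 at this point; it watches the routing socket and adds that listener once IPv6LL comes up (visible at 13:49:17 further down). A short sketch for checking peer and synchronization state, assuming the query tools shipped alongside ntpd are present:

  # configured peers and reachability
  ntpq -p
  # whether the system clock is considered synchronized
  timedatectl status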
Jan 30 13:49:14.191875 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:49:14.298532 jq[1453]: true Jan 30 13:49:14.320997 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:49:14.345776 jq[1459]: true Jan 30 13:49:14.352416 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:49:14.374367 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:49:14.406103 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:49:14.435102 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:49:14.435392 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:49:14.435618 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:49:14.457097 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 13:49:14.463940 tar[1458]: linux-amd64/LICENSE Jan 30 13:49:14.464660 tar[1458]: linux-amd64/helm Jan 30 13:49:14.466890 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:49:14.467147 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:49:14.488724 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:49:14.553420 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:49:14.554827 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:49:14.578124 systemd[1]: Starting sshkeys.service... Jan 30 13:49:14.673865 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:49:14.692584 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:49:14.728466 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 13:49:14.730149 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1484 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 13:49:14.731817 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 13:49:14.759198 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 30 13:49:14.895479 polkitd[1497]: Started polkitd version 121 Jan 30 13:49:14.912774 coreos-metadata[1494]: Jan 30 13:49:14.908 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 30 13:49:14.911641 polkitd[1497]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 13:49:14.911726 polkitd[1497]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 13:49:14.916324 coreos-metadata[1494]: Jan 30 13:49:14.916 INFO Fetch failed with 404: resource not found Jan 30 13:49:14.916324 coreos-metadata[1494]: Jan 30 13:49:14.916 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 30 13:49:14.917819 coreos-metadata[1494]: Jan 30 13:49:14.917 INFO Fetch successful Jan 30 13:49:14.917819 coreos-metadata[1494]: Jan 30 13:49:14.917 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 30 13:49:14.918568 coreos-metadata[1494]: Jan 30 13:49:14.918 INFO Fetch failed with 404: resource not found Jan 30 13:49:14.918780 coreos-metadata[1494]: Jan 30 13:49:14.918 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 30 13:49:14.919885 coreos-metadata[1494]: Jan 30 13:49:14.919 INFO Fetch failed with 404: resource not found Jan 30 13:49:14.919885 coreos-metadata[1494]: Jan 30 13:49:14.919 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 30 13:49:14.922069 polkitd[1497]: Finished loading, compiling and executing 2 rules Jan 30 13:49:14.922862 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 13:49:14.923192 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 13:49:14.928052 coreos-metadata[1494]: Jan 30 13:49:14.926 INFO Fetch successful Jan 30 13:49:14.927445 polkitd[1497]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 13:49:14.935275 unknown[1494]: wrote ssh authorized keys file for user: core Jan 30 13:49:14.975413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:49:14.992545 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:49:14.995598 systemd-hostnamed[1484]: Hostname set to (transient) Jan 30 13:49:14.996338 systemd-resolved[1315]: System hostname changed to 'ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal'. Jan 30 13:49:15.004659 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:49:15.016009 update-ssh-keys[1513]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:49:15.017174 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:49:15.034160 systemd[1]: Finished sshkeys.service. Jan 30 13:49:15.050463 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:49:15.073128 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:49:15.096423 systemd[1]: Started sshd@0-10.128.0.25:22-139.178.68.195:55034.service - OpenSSH per-connection server daemon (139.178.68.195:55034). Jan 30 13:49:15.109079 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:49:15.110401 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:49:15.136181 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
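coreos-metadata-sshkeys tries the instance-level sshKeys and ssh-keys attributes first, then the project-level ones, treating 404s as "attribute not set", and writes the collected keys to /home/core/.ssh/authorized_keys. The same lookup order can be reproduced by hand; a sketch using the URLs from the log:

  MD=http://169.254.169.254/computeMetadata/v1
  for p in instance/attributes/sshKeys instance/attributes/ssh-keys \
           project/attributes/sshKeys project/attributes/ssh-keys; do
    echo "== $p"; curl -s -H 'Metadata-Flavor: Google' "$MD/$p"; echo
  done
  cat /home/core/.ssh/authorized_keys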
Jan 30 13:49:15.142772 containerd[1461]: time="2025-01-30T13:49:15.141173565Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:49:15.158910 systemd-networkd[1375]: eth0: Gained IPv6LL Jan 30 13:49:15.171916 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:49:15.183482 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:49:15.207008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:15.225426 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:49:15.243156 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 30 13:49:15.253809 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:49:15.263806 containerd[1461]: time="2025-01-30T13:49:15.263207839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:15.269292 init.sh[1539]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 30 13:49:15.281381 init.sh[1539]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 30 13:49:15.281381 init.sh[1539]: + /usr/bin/google_instance_setup Jan 30 13:49:15.283655 containerd[1461]: time="2025-01-30T13:49:15.283602844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:15.285717 containerd[1461]: time="2025-01-30T13:49:15.285389982Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:49:15.285717 containerd[1461]: time="2025-01-30T13:49:15.285466377Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:49:15.288109 containerd[1461]: time="2025-01-30T13:49:15.287802861Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:49:15.288109 containerd[1461]: time="2025-01-30T13:49:15.287872981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:15.288109 containerd[1461]: time="2025-01-30T13:49:15.288036744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:15.288109 containerd[1461]: time="2025-01-30T13:49:15.288064019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:15.290012 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:49:15.290364 containerd[1461]: time="2025-01-30T13:49:15.289782727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:15.290364 containerd[1461]: time="2025-01-30T13:49:15.290197699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:49:15.293785 containerd[1461]: time="2025-01-30T13:49:15.290719204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:15.293785 containerd[1461]: time="2025-01-30T13:49:15.293080502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:15.293785 containerd[1461]: time="2025-01-30T13:49:15.293280818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:15.293785 containerd[1461]: time="2025-01-30T13:49:15.293670751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:15.296433 containerd[1461]: time="2025-01-30T13:49:15.295995629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:15.296433 containerd[1461]: time="2025-01-30T13:49:15.296030778Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:49:15.296433 containerd[1461]: time="2025-01-30T13:49:15.296239301Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:49:15.296433 containerd[1461]: time="2025-01-30T13:49:15.296388701Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:49:15.308223 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:49:15.312772 containerd[1461]: time="2025-01-30T13:49:15.311895399Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:49:15.312772 containerd[1461]: time="2025-01-30T13:49:15.311980438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:49:15.312772 containerd[1461]: time="2025-01-30T13:49:15.312010547Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:49:15.312772 containerd[1461]: time="2025-01-30T13:49:15.312083648Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:49:15.312772 containerd[1461]: time="2025-01-30T13:49:15.312108795Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:49:15.312772 containerd[1461]: time="2025-01-30T13:49:15.312315840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315187995Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315384008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315413217Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315436648Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315462107Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315485129Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315506895Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315531898Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315556680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315578459Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315600065Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315619551Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315663417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.318202 containerd[1461]: time="2025-01-30T13:49:15.315687986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.319448 containerd[1461]: time="2025-01-30T13:49:15.315708677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.319205 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.321857627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.321901235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.321960017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.322003888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.322029349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.322052088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.322088903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.322115599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.323445 containerd[1461]: time="2025-01-30T13:49:15.322139381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.331410 containerd[1461]: time="2025-01-30T13:49:15.331364640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.340413 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:49:15.340893 containerd[1461]: time="2025-01-30T13:49:15.340851641Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:49:15.341190 containerd[1461]: time="2025-01-30T13:49:15.341166607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.341545 containerd[1461]: time="2025-01-30T13:49:15.341515983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.342109 containerd[1461]: time="2025-01-30T13:49:15.342078314Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:49:15.342350 containerd[1461]: time="2025-01-30T13:49:15.342325767Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:49:15.342945 containerd[1461]: time="2025-01-30T13:49:15.342911625Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:49:15.343053 containerd[1461]: time="2025-01-30T13:49:15.343034638Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:49:15.343148 containerd[1461]: time="2025-01-30T13:49:15.343127502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:49:15.343227 containerd[1461]: time="2025-01-30T13:49:15.343210530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:49:15.343450 containerd[1461]: time="2025-01-30T13:49:15.343315102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:49:15.343450 containerd[1461]: time="2025-01-30T13:49:15.343343563Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:49:15.343450 containerd[1461]: time="2025-01-30T13:49:15.343362448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:49:15.345908 containerd[1461]: time="2025-01-30T13:49:15.344527533Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:49:15.345908 containerd[1461]: time="2025-01-30T13:49:15.344756099Z" level=info msg="Connect containerd service" Jan 30 13:49:15.345908 containerd[1461]: time="2025-01-30T13:49:15.344822798Z" level=info msg="using legacy CRI server" Jan 30 13:49:15.345908 containerd[1461]: time="2025-01-30T13:49:15.344856860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:49:15.345908 containerd[1461]: time="2025-01-30T13:49:15.345044422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.349349778Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:49:15.350437 
containerd[1461]: time="2025-01-30T13:49:15.349493555Z" level=info msg="Start subscribing containerd event" Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.349558239Z" level=info msg="Start recovering state" Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.349657953Z" level=info msg="Start event monitor" Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.349673917Z" level=info msg="Start snapshots syncer" Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.349688492Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.349700198Z" level=info msg="Start streaming server" Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.350113794Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:49:15.350437 containerd[1461]: time="2025-01-30T13:49:15.350182590Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:49:15.350345 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:49:15.353370 containerd[1461]: time="2025-01-30T13:49:15.352827962Z" level=info msg="containerd successfully booted in 0.214591s" Jan 30 13:49:15.594785 sshd[1528]: Accepted publickey for core from 139.178.68.195 port 55034 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:15.600108 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:15.627264 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:49:15.646122 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:49:15.664177 systemd-logind[1448]: New session 1 of user core. Jan 30 13:49:15.698493 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:49:15.725705 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:49:15.748552 tar[1458]: linux-amd64/README.md Jan 30 13:49:15.770269 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:49:15.771784 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:49:15.998952 systemd[1556]: Queued start job for default target default.target. Jan 30 13:49:16.004319 systemd[1556]: Created slice app.slice - User Application Slice. Jan 30 13:49:16.004358 systemd[1556]: Reached target paths.target - Paths. Jan 30 13:49:16.004384 systemd[1556]: Reached target timers.target - Timers. Jan 30 13:49:16.006988 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:49:16.036351 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:49:16.036528 systemd[1556]: Reached target sockets.target - Sockets. Jan 30 13:49:16.036553 systemd[1556]: Reached target basic.target - Basic System. Jan 30 13:49:16.036612 systemd[1556]: Reached target default.target - Main User Target. Jan 30 13:49:16.036664 systemd[1556]: Startup finished in 240ms. Jan 30 13:49:16.037605 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:49:16.053969 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:49:16.134094 instance-setup[1544]: INFO Running google_set_multiqueue. Jan 30 13:49:16.150312 instance-setup[1544]: INFO Set channels for eth0 to 2. Jan 30 13:49:16.154144 instance-setup[1544]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
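containerd comes up with the overlayfs snapshotter and the CRI plugin, and the earlier "failed to load cni during init" error is expected at this stage: nothing has written a CNI configuration to /etc/cni/net.d yet, which normally only happens once a Kubernetes network plugin is deployed. A hedged sketch for confirming the runtime is answering on its sockets:

  # CRI-level view of the runtime and its (still empty) CNI config
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head
  # native containerd client
  ctr version
  ls /etc/cni/net.d 2>/dev/null || echo 'no CNI config yet'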
Jan 30 13:49:16.155554 instance-setup[1544]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 30 13:49:16.155899 instance-setup[1544]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 30 13:49:16.157604 instance-setup[1544]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 30 13:49:16.157992 instance-setup[1544]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 30 13:49:16.160143 instance-setup[1544]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 30 13:49:16.160216 instance-setup[1544]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 30 13:49:16.161682 instance-setup[1544]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 30 13:49:16.170979 instance-setup[1544]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 30 13:49:16.175432 instance-setup[1544]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 30 13:49:16.177488 instance-setup[1544]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 30 13:49:16.177536 instance-setup[1544]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 30 13:49:16.197910 init.sh[1539]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 30 13:49:16.342314 systemd[1]: Started sshd@1-10.128.0.25:22-139.178.68.195:33152.service - OpenSSH per-connection server daemon (139.178.68.195:33152). Jan 30 13:49:16.449385 startup-script[1598]: INFO Starting startup scripts. Jan 30 13:49:16.455245 startup-script[1598]: INFO No startup scripts found in metadata. Jan 30 13:49:16.455326 startup-script[1598]: INFO Finished running startup scripts. Jan 30 13:49:16.476170 init.sh[1539]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 30 13:49:16.476170 init.sh[1539]: + daemon_pids=() Jan 30 13:49:16.476699 init.sh[1539]: + for d in accounts clock_skew network Jan 30 13:49:16.476699 init.sh[1539]: + daemon_pids+=($!) Jan 30 13:49:16.476699 init.sh[1539]: + for d in accounts clock_skew network Jan 30 13:49:16.477781 init.sh[1539]: + daemon_pids+=($!) Jan 30 13:49:16.477781 init.sh[1539]: + for d in accounts clock_skew network Jan 30 13:49:16.477781 init.sh[1539]: + daemon_pids+=($!) Jan 30 13:49:16.477781 init.sh[1539]: + NOTIFY_SOCKET=/run/systemd/notify Jan 30 13:49:16.477781 init.sh[1539]: + /usr/bin/systemd-notify --ready Jan 30 13:49:16.478055 init.sh[1605]: + /usr/bin/google_accounts_daemon Jan 30 13:49:16.478569 init.sh[1607]: + /usr/bin/google_network_daemon Jan 30 13:49:16.479357 init.sh[1606]: + /usr/bin/google_clock_skew_daemon Jan 30 13:49:16.496875 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 30 13:49:16.506767 init.sh[1539]: + wait -n 1605 1606 1607 Jan 30 13:49:16.758821 sshd[1601]: Accepted publickey for core from 139.178.68.195 port 33152 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:16.762300 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:16.775165 systemd-logind[1448]: New session 2 of user core. Jan 30 13:49:16.781954 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:49:16.866256 google-networking[1607]: INFO Starting Google Networking daemon. Jan 30 13:49:16.872808 google-clock-skew[1606]: INFO Starting Google Clock Skew daemon. Jan 30 13:49:16.883065 google-clock-skew[1606]: INFO Clock drift token has changed: 0. 
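instance-setup's google_set_multiqueue spreads the virtio-net IRQs 31-34 across the two vCPUs and programs XPS so that tx-0 is steered to CPU 0 (mask 1) and tx-1 to CPU 1 (mask 2). The resulting state can be read back from procfs/sysfs, e.g.:

  # transmit/receive queues exposed by the NIC
  ls /sys/class/net/eth0/queues/
  # XPS CPU masks programmed above (1 -> CPU0, 2 -> CPU1)
  cat /sys/class/net/eth0/queues/tx-0/xps_cpus /sys/class/net/eth0/queues/tx-1/xps_cpus
  # IRQ affinities set for virtio1
  grep . /proc/irq/3[1-4]/smp_affinity_list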
Jan 30 13:49:16.946603 groupadd[1618]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 30 13:49:16.955053 groupadd[1618]: group added to /etc/gshadow: name=google-sudoers Jan 30 13:49:17.013522 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:17.021321 systemd[1]: sshd@1-10.128.0.25:22-139.178.68.195:33152.service: Deactivated successfully. Jan 30 13:49:17.023512 groupadd[1618]: new group: name=google-sudoers, GID=1000 Jan 30 13:49:17.026315 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:49:17.029638 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:49:17.032619 systemd-logind[1448]: Removed session 2. Jan 30 13:49:17.058382 google-accounts[1605]: INFO Starting Google Accounts daemon. Jan 30 13:49:17.070899 google-accounts[1605]: WARNING OS Login not installed. Jan 30 13:49:17.072846 google-accounts[1605]: INFO Creating a new user account for 0. Jan 30 13:49:17.080783 init.sh[1630]: useradd: invalid user name '0': use --badname to ignore Jan 30 13:49:17.078617 google-accounts[1605]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 30 13:49:17.083401 systemd[1]: Started sshd@2-10.128.0.25:22-139.178.68.195:33162.service - OpenSSH per-connection server daemon (139.178.68.195:33162). Jan 30 13:49:17.170563 ntpd[1429]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:19%2]:123 Jan 30 13:49:17.171140 ntpd[1429]: 30 Jan 13:49:17 ntpd[1429]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:19%2]:123 Jan 30 13:49:17.000462 google-clock-skew[1606]: INFO Synced system time with hardware clock. Jan 30 13:49:17.018013 systemd-journald[1109]: Time jumped backwards, rotating. Jan 30 13:49:17.001898 systemd-resolved[1315]: Clock change detected. Flushing caches. Jan 30 13:49:17.041457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:17.053159 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:49:17.059807 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:49:17.063768 systemd[1]: Startup finished in 996ms (kernel) + 9.599s (initrd) + 9.068s (userspace) = 19.665s. Jan 30 13:49:17.160069 sshd[1631]: Accepted publickey for core from 139.178.68.195 port 33162 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:17.162122 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:17.169772 systemd-logind[1448]: New session 3 of user core. Jan 30 13:49:17.177513 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:49:17.416114 sshd[1631]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:17.421418 systemd[1]: sshd@2-10.128.0.25:22-139.178.68.195:33162.service: Deactivated successfully. Jan 30 13:49:17.424564 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:49:17.426920 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:49:17.428522 systemd-logind[1448]: Removed session 3. 
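google-clock-skew steps the system clock to match the hardware clock, which is why systemd-journald reports "Time jumped backwards, rotating" and systemd-resolved flushes its caches; judging by the adjacent timestamps the step is only a fraction of a second, so entries around it appear out of wall-clock order while remaining causally ordered. A quick post-hoc check of the clock state might look like:

  # system clock, RTC and sync status
  timedatectl status
  # compare the running clock against the hardware clock directly
  hwclock --show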
Jan 30 13:49:17.910806 kubelet[1640]: E0130 13:49:17.910127 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:49:17.913233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:49:17.913531 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:49:17.914110 systemd[1]: kubelet.service: Consumed 1.254s CPU time. Jan 30 13:49:27.484649 systemd[1]: Started sshd@3-10.128.0.25:22-139.178.68.195:39556.service - OpenSSH per-connection server daemon (139.178.68.195:39556). Jan 30 13:49:27.831420 sshd[1656]: Accepted publickey for core from 139.178.68.195 port 39556 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:27.833253 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:27.838400 systemd-logind[1448]: New session 4 of user core. Jan 30 13:49:27.846445 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:49:28.034997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:49:28.045124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:28.088570 sshd[1656]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:28.096001 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:49:28.097034 systemd[1]: sshd@3-10.128.0.25:22-139.178.68.195:39556.service: Deactivated successfully. Jan 30 13:49:28.102162 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:49:28.104500 systemd-logind[1448]: Removed session 4. Jan 30 13:49:28.154634 systemd[1]: Started sshd@4-10.128.0.25:22-139.178.68.195:39568.service - OpenSSH per-connection server daemon (139.178.68.195:39568). Jan 30 13:49:28.386517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:28.388975 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:49:28.441460 kubelet[1673]: E0130 13:49:28.441344 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:49:28.445624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:49:28.445862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:49:28.500631 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 39568 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:28.502489 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:28.508029 systemd-logind[1448]: New session 5 of user core. Jan 30 13:49:28.516446 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:49:28.747817 sshd[1666]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:28.752031 systemd[1]: sshd@4-10.128.0.25:22-139.178.68.195:39568.service: Deactivated successfully. 
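The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is only written by 'kubeadm init' or 'kubeadm join' (their kubelet-start phase), so the unit is expected to crash-loop, which is exactly what the scheduled restarts and the growing restart counter below show. A sketch for confirming the state and, on a control-plane node, generating the missing config (flags are illustrative only):

  ls -l /var/lib/kubelet/config.yaml
  journalctl -u kubelet -b --no-pager | tail -n 5
  # kubeadm writes /var/lib/kubelet/config.yaml during its kubelet-start phase
  kubeadm init --pod-network-cidr=10.244.0.0/16   # hypothetical flags, cluster-specific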
Jan 30 13:49:28.754407 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:49:28.756181 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:49:28.757784 systemd-logind[1448]: Removed session 5. Jan 30 13:49:28.812636 systemd[1]: Started sshd@5-10.128.0.25:22-139.178.68.195:39584.service - OpenSSH per-connection server daemon (139.178.68.195:39584). Jan 30 13:49:29.154731 sshd[1686]: Accepted publickey for core from 139.178.68.195 port 39584 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:29.156647 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:29.162776 systemd-logind[1448]: New session 6 of user core. Jan 30 13:49:29.179522 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:49:29.407815 sshd[1686]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:29.413244 systemd[1]: sshd@5-10.128.0.25:22-139.178.68.195:39584.service: Deactivated successfully. Jan 30 13:49:29.415367 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:49:29.416220 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:49:29.417664 systemd-logind[1448]: Removed session 6. Jan 30 13:49:29.473641 systemd[1]: Started sshd@6-10.128.0.25:22-139.178.68.195:39586.service - OpenSSH per-connection server daemon (139.178.68.195:39586). Jan 30 13:49:29.819153 sshd[1693]: Accepted publickey for core from 139.178.68.195 port 39586 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:29.820866 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:29.827075 systemd-logind[1448]: New session 7 of user core. Jan 30 13:49:29.833487 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:49:30.041841 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:49:30.042394 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:49:30.058037 sudo[1696]: pam_unix(sudo:session): session closed for user root Jan 30 13:49:30.111345 sshd[1693]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:30.117451 systemd[1]: sshd@6-10.128.0.25:22-139.178.68.195:39586.service: Deactivated successfully. Jan 30 13:49:30.119719 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:49:30.120654 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:49:30.122057 systemd-logind[1448]: Removed session 7. Jan 30 13:49:30.174632 systemd[1]: Started sshd@7-10.128.0.25:22-139.178.68.195:39590.service - OpenSSH per-connection server daemon (139.178.68.195:39590). Jan 30 13:49:30.512924 sshd[1701]: Accepted publickey for core from 139.178.68.195 port 39590 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:30.514931 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:30.521495 systemd-logind[1448]: New session 8 of user core. Jan 30 13:49:30.528498 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 13:49:30.718005 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:49:30.718515 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:49:30.723240 sudo[1705]: pam_unix(sudo:session): session closed for user root Jan 30 13:49:30.736414 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:49:30.736884 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:49:30.753648 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:49:30.756474 auditctl[1708]: No rules Jan 30 13:49:30.757797 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:49:30.758089 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:49:30.760755 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:49:30.796972 augenrules[1726]: No rules Jan 30 13:49:30.797822 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:49:30.799696 sudo[1704]: pam_unix(sudo:session): session closed for user root Jan 30 13:49:30.851792 sshd[1701]: pam_unix(sshd:session): session closed for user core Jan 30 13:49:30.856005 systemd[1]: sshd@7-10.128.0.25:22-139.178.68.195:39590.service: Deactivated successfully. Jan 30 13:49:30.858379 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:49:30.860108 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:49:30.861626 systemd-logind[1448]: Removed session 8. Jan 30 13:49:30.916654 systemd[1]: Started sshd@8-10.128.0.25:22-139.178.68.195:39592.service - OpenSSH per-connection server daemon (139.178.68.195:39592). Jan 30 13:49:31.255364 sshd[1734]: Accepted publickey for core from 139.178.68.195 port 39592 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:49:31.257099 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:49:31.263353 systemd-logind[1448]: New session 9 of user core. Jan 30 13:49:31.272548 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:49:31.461896 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:49:31.462424 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:49:31.884636 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:49:31.896916 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:49:32.325582 dockerd[1754]: time="2025-01-30T13:49:32.325494963Z" level=info msg="Starting up" Jan 30 13:49:32.440205 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1073726665-merged.mount: Deactivated successfully. Jan 30 13:49:32.460398 dockerd[1754]: time="2025-01-30T13:49:32.460353102Z" level=info msg="Loading containers: start." Jan 30 13:49:32.600315 kernel: Initializing XFRM netlink socket Jan 30 13:49:32.697461 systemd-networkd[1375]: docker0: Link UP Jan 30 13:49:32.717335 dockerd[1754]: time="2025-01-30T13:49:32.717286121Z" level=info msg="Loading containers: done." Jan 30 13:49:32.739961 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1169448843-merged.mount: Deactivated successfully. 
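Triggered by the install.sh run above, the Docker daemon starts: it checks overlayfs support, initializes its networking (the XFRM netlink socket message) and brings up the docker0 bridge, which systemd-networkd reports gaining a link. Once it logs "API listen on /run/docker.sock" just below, the API can be exercised directly; a sketch:

  docker version
  docker info --format '{{.Driver}} / {{.ServerVersion}}'
  ip link show docker0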
Jan 30 13:49:32.742861 dockerd[1754]: time="2025-01-30T13:49:32.742799247Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:49:32.742977 dockerd[1754]: time="2025-01-30T13:49:32.742935022Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:49:32.743144 dockerd[1754]: time="2025-01-30T13:49:32.743097855Z" level=info msg="Daemon has completed initialization" Jan 30 13:49:32.781450 dockerd[1754]: time="2025-01-30T13:49:32.780670492Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:49:32.780906 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:49:33.594469 containerd[1461]: time="2025-01-30T13:49:33.594405626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:49:34.190227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134908111.mount: Deactivated successfully. Jan 30 13:49:35.707439 containerd[1461]: time="2025-01-30T13:49:35.707371855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:35.709041 containerd[1461]: time="2025-01-30T13:49:35.708970246Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674830" Jan 30 13:49:35.710240 containerd[1461]: time="2025-01-30T13:49:35.710166488Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:35.714666 containerd[1461]: time="2025-01-30T13:49:35.714628681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:35.717022 containerd[1461]: time="2025-01-30T13:49:35.716464818Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 2.121999734s" Jan 30 13:49:35.717022 containerd[1461]: time="2025-01-30T13:49:35.716516053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:49:35.717809 containerd[1461]: time="2025-01-30T13:49:35.717599707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:49:37.348181 containerd[1461]: time="2025-01-30T13:49:37.348118846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:37.349785 containerd[1461]: time="2025-01-30T13:49:37.349718789Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770717" Jan 30 13:49:37.351044 containerd[1461]: time="2025-01-30T13:49:37.350972341Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:37.354366 containerd[1461]: time="2025-01-30T13:49:37.354324909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:37.356193 containerd[1461]: time="2025-01-30T13:49:37.355775915Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.638133741s" Jan 30 13:49:37.356193 containerd[1461]: time="2025-01-30T13:49:37.355826955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:49:37.356775 containerd[1461]: time="2025-01-30T13:49:37.356737237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:49:38.655703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:49:38.664534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:38.725172 containerd[1461]: time="2025-01-30T13:49:38.725072331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:38.764903 containerd[1461]: time="2025-01-30T13:49:38.764363209Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169765" Jan 30 13:49:38.767045 containerd[1461]: time="2025-01-30T13:49:38.766997181Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:38.796629 containerd[1461]: time="2025-01-30T13:49:38.796566337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:38.798420 containerd[1461]: time="2025-01-30T13:49:38.798350243Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.441465484s" Jan 30 13:49:38.798420 containerd[1461]: time="2025-01-30T13:49:38.798419540Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:49:38.799169 containerd[1461]: time="2025-01-30T13:49:38.799114168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:49:38.953842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:49:38.967774 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:49:39.021193 kubelet[1961]: E0130 13:49:39.021146 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:49:39.024042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:49:39.024249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:49:40.139528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904065789.mount: Deactivated successfully. Jan 30 13:49:40.783595 containerd[1461]: time="2025-01-30T13:49:40.783527008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:40.784842 containerd[1461]: time="2025-01-30T13:49:40.784774252Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909472" Jan 30 13:49:40.786149 containerd[1461]: time="2025-01-30T13:49:40.786081530Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:40.788738 containerd[1461]: time="2025-01-30T13:49:40.788652085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:40.790080 containerd[1461]: time="2025-01-30T13:49:40.789563502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.990394192s" Jan 30 13:49:40.790080 containerd[1461]: time="2025-01-30T13:49:40.789613476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:49:40.792481 containerd[1461]: time="2025-01-30T13:49:40.792450288Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:49:41.277113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804911439.mount: Deactivated successfully. 
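The kubelet exit above (run.go:72) is the usual pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes it, so systemd keeps scheduling restarts in the meantime. A hedged way to confirm this state from a shell on the node (illustrative commands, not part of the log):

    systemctl status kubelet --no-pager          # shows the restart counter and the last exit status
    ls -l /var/lib/kubelet/config.yaml           # absent until kubeadm generates the kubelet configuration
    journalctl -u kubelet -b --no-pager | tail   # repeats the "no such file or directory" error seen above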
Jan 30 13:49:42.326744 containerd[1461]: time="2025-01-30T13:49:42.326672059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:42.328435 containerd[1461]: time="2025-01-30T13:49:42.328370466Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247" Jan 30 13:49:42.329703 containerd[1461]: time="2025-01-30T13:49:42.329637230Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:42.333214 containerd[1461]: time="2025-01-30T13:49:42.333156800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:42.335081 containerd[1461]: time="2025-01-30T13:49:42.334912586Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.542318055s" Jan 30 13:49:42.335081 containerd[1461]: time="2025-01-30T13:49:42.334959966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:49:42.336615 containerd[1461]: time="2025-01-30T13:49:42.336553368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:49:42.779843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895040674.mount: Deactivated successfully. 
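By this point containerd has pulled most of the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause). If the pulls need to be verified out of band, crictl can list what landed in the image store; this is an illustrative check that assumes /etc/crictl.yaml points at the containerd socket, not something the log itself runs:

    crictl images | grep -E 'registry.k8s.io/(kube-|coredns|pause|etcd)'   # images pulled by containerd above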
Jan 30 13:49:42.786087 containerd[1461]: time="2025-01-30T13:49:42.786031028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:42.787312 containerd[1461]: time="2025-01-30T13:49:42.787243718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jan 30 13:49:42.788513 containerd[1461]: time="2025-01-30T13:49:42.788437482Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:42.791474 containerd[1461]: time="2025-01-30T13:49:42.791408314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:42.792640 containerd[1461]: time="2025-01-30T13:49:42.792452177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 455.854124ms" Jan 30 13:49:42.792640 containerd[1461]: time="2025-01-30T13:49:42.792496993Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:49:42.793628 containerd[1461]: time="2025-01-30T13:49:42.793470562Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:49:43.236107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492800557.mount: Deactivated successfully. Jan 30 13:49:44.735121 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 30 13:49:45.449529 containerd[1461]: time="2025-01-30T13:49:45.449461791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:45.451325 containerd[1461]: time="2025-01-30T13:49:45.451241620Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551326" Jan 30 13:49:45.452206 containerd[1461]: time="2025-01-30T13:49:45.452140209Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:45.456201 containerd[1461]: time="2025-01-30T13:49:45.456130717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:45.457984 containerd[1461]: time="2025-01-30T13:49:45.457805428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.664292954s" Jan 30 13:49:45.457984 containerd[1461]: time="2025-01-30T13:49:45.457853472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:49:49.003250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:49.010634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:49.047644 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit session-9.scope)... Jan 30 13:49:49.047667 systemd[1]: Reloading... Jan 30 13:49:49.209152 zram_generator::config[2157]: No configuration found. Jan 30 13:49:49.367710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:49.479351 systemd[1]: Reloading finished in 431 ms. Jan 30 13:49:49.530668 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:49:49.530799 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:49:49.531458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:49.539715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:50.342773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:50.350163 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:49:50.406337 kubelet[2206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:49:50.406337 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
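The systemd reload above re-reads unit files (and notes in passing that docker.socket still references the legacy /var/run/docker.sock path), after which the kubelet is restarted and immediately warns about several deprecated command-line flags. A hedged way to see where those flags and the legacy path come from on a systemd host (illustrative only):

    systemctl cat kubelet.service --no-pager          # drop-ins that pass --container-runtime-endpoint and friends
    systemctl cat docker.socket | grep ListenStream   # the /var/run/docker.sock path the reload warning refers to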
Jan 30 13:49:50.406337 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:49:50.406854 kubelet[2206]: I0130 13:49:50.406440 2206 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:49:50.961128 kubelet[2206]: I0130 13:49:50.961063 2206 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:49:50.961128 kubelet[2206]: I0130 13:49:50.961103 2206 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:49:50.961575 kubelet[2206]: I0130 13:49:50.961535 2206 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:49:50.995604 kubelet[2206]: E0130 13:49:50.995548 2206 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:50.999433 kubelet[2206]: I0130 13:49:50.999253 2206 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:49:51.014549 kubelet[2206]: E0130 13:49:51.014493 2206 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:49:51.014699 kubelet[2206]: I0130 13:49:51.014607 2206 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:49:51.018354 kubelet[2206]: I0130 13:49:51.018317 2206 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:49:51.018739 kubelet[2206]: I0130 13:49:51.018683 2206 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:49:51.018984 kubelet[2206]: I0130 13:49:51.018726 2206 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:49:51.019160 kubelet[2206]: I0130 13:49:51.018985 2206 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:49:51.019160 kubelet[2206]: I0130 13:49:51.019004 2206 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:49:51.020139 kubelet[2206]: I0130 13:49:51.020099 2206 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:49:51.025586 kubelet[2206]: I0130 13:49:51.025546 2206 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:49:51.025586 kubelet[2206]: I0130 13:49:51.025580 2206 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:49:51.026064 kubelet[2206]: I0130 13:49:51.025607 2206 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:49:51.026064 kubelet[2206]: I0130 13:49:51.025622 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:49:51.035831 kubelet[2206]: W0130 13:49:51.035736 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:51.035925 kubelet[2206]: E0130 13:49:51.035891 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 
10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:51.037231 kubelet[2206]: W0130 13:49:51.036070 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:51.037231 kubelet[2206]: E0130 13:49:51.036154 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:51.037231 kubelet[2206]: I0130 13:49:51.036351 2206 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:49:51.037231 kubelet[2206]: I0130 13:49:51.037068 2206 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:49:51.037231 kubelet[2206]: W0130 13:49:51.037140 2206 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:49:51.039906 kubelet[2206]: I0130 13:49:51.039865 2206 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:49:51.039984 kubelet[2206]: I0130 13:49:51.039915 2206 server.go:1287] "Started kubelet" Jan 30 13:49:51.040166 kubelet[2206]: I0130 13:49:51.040105 2206 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:49:51.041847 kubelet[2206]: I0130 13:49:51.041375 2206 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:49:51.044289 kubelet[2206]: I0130 13:49:51.044228 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:49:51.046578 kubelet[2206]: I0130 13:49:51.045592 2206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:49:51.046578 kubelet[2206]: I0130 13:49:51.045892 2206 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:49:51.050095 kubelet[2206]: E0130 13:49:51.048188 2206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal.181f7c9c1788cee8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,UID:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,},FirstTimestamp:2025-01-30 13:49:51.039885032 +0000 UTC m=+0.684232468,LastTimestamp:2025-01-30 13:49:51.039885032 +0000 UTC m=+0.684232468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,}" Jan 30 13:49:51.052393 kubelet[2206]: I0130 13:49:51.051152 2206 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:49:51.053042 kubelet[2206]: I0130 13:49:51.052989 2206 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:49:51.053335 kubelet[2206]: E0130 13:49:51.053307 2206 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" Jan 30 13:49:51.058632 kubelet[2206]: E0130 13:49:51.056797 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="200ms" Jan 30 13:49:51.058632 kubelet[2206]: I0130 13:49:51.057301 2206 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:49:51.058632 kubelet[2206]: I0130 13:49:51.057348 2206 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:49:51.058632 kubelet[2206]: W0130 13:49:51.057767 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:51.058632 kubelet[2206]: E0130 13:49:51.057838 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:51.059870 kubelet[2206]: E0130 13:49:51.059844 2206 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:49:51.061286 kubelet[2206]: I0130 13:49:51.061252 2206 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:49:51.061412 kubelet[2206]: I0130 13:49:51.061397 2206 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:49:51.061577 kubelet[2206]: I0130 13:49:51.061555 2206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:49:51.074954 kubelet[2206]: I0130 13:49:51.074888 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:49:51.078203 kubelet[2206]: I0130 13:49:51.077694 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:49:51.078203 kubelet[2206]: I0130 13:49:51.077742 2206 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:49:51.078203 kubelet[2206]: I0130 13:49:51.077774 2206 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:49:51.078203 kubelet[2206]: I0130 13:49:51.077792 2206 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:49:51.078203 kubelet[2206]: E0130 13:49:51.077883 2206 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:49:51.089771 kubelet[2206]: W0130 13:49:51.089698 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:51.089890 kubelet[2206]: E0130 13:49:51.089774 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:51.101769 kubelet[2206]: I0130 13:49:51.101733 2206 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:49:51.101769 kubelet[2206]: I0130 13:49:51.101758 2206 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:49:51.102052 kubelet[2206]: I0130 13:49:51.101781 2206 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:49:51.103893 kubelet[2206]: I0130 13:49:51.103869 2206 policy_none.go:49] "None policy: Start" Jan 30 13:49:51.103893 kubelet[2206]: I0130 13:49:51.103897 2206 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:49:51.104058 kubelet[2206]: I0130 13:49:51.103915 2206 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:49:51.111396 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:49:51.127017 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:49:51.131976 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:49:51.140340 kubelet[2206]: I0130 13:49:51.140310 2206 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:49:51.140764 kubelet[2206]: I0130 13:49:51.140730 2206 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:49:51.140911 kubelet[2206]: I0130 13:49:51.140867 2206 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:49:51.143192 kubelet[2206]: I0130 13:49:51.143169 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:49:51.143548 kubelet[2206]: E0130 13:49:51.143205 2206 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:49:51.143760 kubelet[2206]: E0130 13:49:51.143678 2206 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" Jan 30 13:49:51.221225 systemd[1]: Created slice kubepods-burstable-pod5ea8ba3199ced86c6029bdbcb604e29a.slice - libcontainer container kubepods-burstable-pod5ea8ba3199ced86c6029bdbcb604e29a.slice. 
Jan 30 13:49:51.232488 kubelet[2206]: E0130 13:49:51.232173 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.237712 systemd[1]: Created slice kubepods-burstable-podecdfc0968b5849cd8e9af49dcb4f2085.slice - libcontainer container kubepods-burstable-podecdfc0968b5849cd8e9af49dcb4f2085.slice. Jan 30 13:49:51.248011 kubelet[2206]: E0130 13:49:51.247781 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.251924 systemd[1]: Created slice kubepods-burstable-pod7581093808d45e82f382678a14b1e986.slice - libcontainer container kubepods-burstable-pod7581093808d45e82f382678a14b1e986.slice. Jan 30 13:49:51.254481 kubelet[2206]: E0130 13:49:51.254449 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.257936 kubelet[2206]: E0130 13:49:51.257870 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="400ms" Jan 30 13:49:51.258923 kubelet[2206]: I0130 13:49:51.258538 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.258923 kubelet[2206]: I0130 13:49:51.258587 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.258923 kubelet[2206]: I0130 13:49:51.258621 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7581093808d45e82f382678a14b1e986-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"7581093808d45e82f382678a14b1e986\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.258923 kubelet[2206]: I0130 13:49:51.258647 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ea8ba3199ced86c6029bdbcb604e29a-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: 
\"5ea8ba3199ced86c6029bdbcb604e29a\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.259201 kubelet[2206]: I0130 13:49:51.258677 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ea8ba3199ced86c6029bdbcb604e29a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"5ea8ba3199ced86c6029bdbcb604e29a\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.259201 kubelet[2206]: I0130 13:49:51.258707 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.259201 kubelet[2206]: I0130 13:49:51.258735 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ea8ba3199ced86c6029bdbcb604e29a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"5ea8ba3199ced86c6029bdbcb604e29a\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.259201 kubelet[2206]: I0130 13:49:51.258781 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.259431 kubelet[2206]: I0130 13:49:51.258825 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.259832 kubelet[2206]: I0130 13:49:51.259795 2206 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.260291 kubelet[2206]: E0130 13:49:51.260225 2206 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: connect: connection refused" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.465511 kubelet[2206]: I0130 13:49:51.465427 2206 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.466026 kubelet[2206]: E0130 13:49:51.465896 2206 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: 
connect: connection refused" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.534569 containerd[1461]: time="2025-01-30T13:49:51.534438487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,Uid:5ea8ba3199ced86c6029bdbcb604e29a,Namespace:kube-system,Attempt:0,}" Jan 30 13:49:51.549590 containerd[1461]: time="2025-01-30T13:49:51.549507304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,Uid:ecdfc0968b5849cd8e9af49dcb4f2085,Namespace:kube-system,Attempt:0,}" Jan 30 13:49:51.555430 containerd[1461]: time="2025-01-30T13:49:51.555383497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,Uid:7581093808d45e82f382678a14b1e986,Namespace:kube-system,Attempt:0,}" Jan 30 13:49:51.658800 kubelet[2206]: E0130 13:49:51.658735 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="800ms" Jan 30 13:49:51.872577 kubelet[2206]: I0130 13:49:51.872446 2206 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.873005 kubelet[2206]: E0130 13:49:51.872898 2206 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: connect: connection refused" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:51.919865 kubelet[2206]: W0130 13:49:51.919783 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:51.919865 kubelet[2206]: E0130 13:49:51.919869 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:51.977848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846612216.mount: Deactivated successfully. 
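All of the "dial tcp 10.128.0.25:6443: connect: connection refused" entries are expected at this stage: the kubelet is only now creating the sandboxes for the kube-apiserver, kube-controller-manager and kube-scheduler static pods, so nothing is listening on port 6443 yet and the lease and reflector calls keep backing off. Two illustrative probes from the node (hypothetical, not taken from the log):

    curl -sk https://10.128.0.25:6443/healthz ; echo   # refused until the kube-apiserver static pod is running
    ss -ltn 'sport = :6443'                            # shows the listener once the apiserver container is up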
Jan 30 13:49:51.986360 containerd[1461]: time="2025-01-30T13:49:51.986306189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:49:51.987549 containerd[1461]: time="2025-01-30T13:49:51.987488787Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:49:51.988729 containerd[1461]: time="2025-01-30T13:49:51.988671347Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:49:51.989615 containerd[1461]: time="2025-01-30T13:49:51.989554894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Jan 30 13:49:51.990973 containerd[1461]: time="2025-01-30T13:49:51.990856069Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:49:51.992760 containerd[1461]: time="2025-01-30T13:49:51.992326088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:49:51.992760 containerd[1461]: time="2025-01-30T13:49:51.992660939Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:49:51.996207 containerd[1461]: time="2025-01-30T13:49:51.996168064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:49:52.003917 containerd[1461]: time="2025-01-30T13:49:52.003850026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.312431ms" Jan 30 13:49:52.006006 containerd[1461]: time="2025-01-30T13:49:52.005966128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 450.489788ms" Jan 30 13:49:52.008111 containerd[1461]: time="2025-01-30T13:49:52.008041245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.436935ms" Jan 30 13:49:52.195077 kubelet[2206]: W0130 13:49:52.194933 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:52.195077 
kubelet[2206]: E0130 13:49:52.195025 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:52.219340 containerd[1461]: time="2025-01-30T13:49:52.219215156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:52.221198 containerd[1461]: time="2025-01-30T13:49:52.221087178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:52.221874 containerd[1461]: time="2025-01-30T13:49:52.221679483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:52.227406 containerd[1461]: time="2025-01-30T13:49:52.227061240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:52.227406 containerd[1461]: time="2025-01-30T13:49:52.227108767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:52.227406 containerd[1461]: time="2025-01-30T13:49:52.227127233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:52.227406 containerd[1461]: time="2025-01-30T13:49:52.227227164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:52.228285 containerd[1461]: time="2025-01-30T13:49:52.226524283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:52.228285 containerd[1461]: time="2025-01-30T13:49:52.226593343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:52.228285 containerd[1461]: time="2025-01-30T13:49:52.226642252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:52.228285 containerd[1461]: time="2025-01-30T13:49:52.226843091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:52.228285 containerd[1461]: time="2025-01-30T13:49:52.227410619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:52.270553 systemd[1]: Started cri-containerd-f245879c40be727c536fa20fbc03ec0078fcc0b3aa6851ced44c2b55621a438e.scope - libcontainer container f245879c40be727c536fa20fbc03ec0078fcc0b3aa6851ced44c2b55621a438e. Jan 30 13:49:52.283482 systemd[1]: Started cri-containerd-68d4c3492ce1cea83c1375415da9cfeff431dde51249849b457fdc7bd62ddd73.scope - libcontainer container 68d4c3492ce1cea83c1375415da9cfeff431dde51249849b457fdc7bd62ddd73. Jan 30 13:49:52.286750 systemd[1]: Started cri-containerd-e4122fc7a11711ad822bc009731c6be015c21c305408cf1bddb4e0ea458815d9.scope - libcontainer container e4122fc7a11711ad822bc009731c6be015c21c305408cf1bddb4e0ea458815d9. 
Jan 30 13:49:52.380520 containerd[1461]: time="2025-01-30T13:49:52.379669404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,Uid:5ea8ba3199ced86c6029bdbcb604e29a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4122fc7a11711ad822bc009731c6be015c21c305408cf1bddb4e0ea458815d9\"" Jan 30 13:49:52.384374 kubelet[2206]: E0130 13:49:52.384305 2206 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-21291" Jan 30 13:49:52.386775 containerd[1461]: time="2025-01-30T13:49:52.386719114Z" level=info msg="CreateContainer within sandbox \"e4122fc7a11711ad822bc009731c6be015c21c305408cf1bddb4e0ea458815d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:49:52.393733 containerd[1461]: time="2025-01-30T13:49:52.393682173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,Uid:7581093808d45e82f382678a14b1e986,Namespace:kube-system,Attempt:0,} returns sandbox id \"f245879c40be727c536fa20fbc03ec0078fcc0b3aa6851ced44c2b55621a438e\"" Jan 30 13:49:52.395686 kubelet[2206]: E0130 13:49:52.395647 2206 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-21291" Jan 30 13:49:52.397903 containerd[1461]: time="2025-01-30T13:49:52.397840587Z" level=info msg="CreateContainer within sandbox \"f245879c40be727c536fa20fbc03ec0078fcc0b3aa6851ced44c2b55621a438e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:49:52.405941 containerd[1461]: time="2025-01-30T13:49:52.405901796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,Uid:ecdfc0968b5849cd8e9af49dcb4f2085,Namespace:kube-system,Attempt:0,} returns sandbox id \"68d4c3492ce1cea83c1375415da9cfeff431dde51249849b457fdc7bd62ddd73\"" Jan 30 13:49:52.409088 kubelet[2206]: E0130 13:49:52.408920 2206 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flat" Jan 30 13:49:52.410935 containerd[1461]: time="2025-01-30T13:49:52.410802488Z" level=info msg="CreateContainer within sandbox \"68d4c3492ce1cea83c1375415da9cfeff431dde51249849b457fdc7bd62ddd73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:49:52.415502 containerd[1461]: time="2025-01-30T13:49:52.415454507Z" level=info msg="CreateContainer within sandbox \"e4122fc7a11711ad822bc009731c6be015c21c305408cf1bddb4e0ea458815d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b63199d7b0f41770dbe9b3b4d8525c428e7b579bb5ace011d9dffdc8c99f2f70\"" Jan 30 13:49:52.416466 containerd[1461]: time="2025-01-30T13:49:52.416417017Z" level=info msg="StartContainer for \"b63199d7b0f41770dbe9b3b4d8525c428e7b579bb5ace011d9dffdc8c99f2f70\"" Jan 30 13:49:52.428553 containerd[1461]: time="2025-01-30T13:49:52.428387798Z" level=info msg="CreateContainer within sandbox 
\"f245879c40be727c536fa20fbc03ec0078fcc0b3aa6851ced44c2b55621a438e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d80d2c25e1fb69720dbe40f680be2abd82b2cf09f9d44a2d5471c548d19ab096\"" Jan 30 13:49:52.429075 containerd[1461]: time="2025-01-30T13:49:52.429041101Z" level=info msg="StartContainer for \"d80d2c25e1fb69720dbe40f680be2abd82b2cf09f9d44a2d5471c548d19ab096\"" Jan 30 13:49:52.436052 containerd[1461]: time="2025-01-30T13:49:52.435917044Z" level=info msg="CreateContainer within sandbox \"68d4c3492ce1cea83c1375415da9cfeff431dde51249849b457fdc7bd62ddd73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"039b1ba81e1e8ceec5fd1d056ab4071accc2521c91b5bee86ad111579536108b\"" Jan 30 13:49:52.437414 containerd[1461]: time="2025-01-30T13:49:52.437361348Z" level=info msg="StartContainer for \"039b1ba81e1e8ceec5fd1d056ab4071accc2521c91b5bee86ad111579536108b\"" Jan 30 13:49:52.459667 kubelet[2206]: E0130 13:49:52.459449 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="1.6s" Jan 30 13:49:52.467011 systemd[1]: Started cri-containerd-b63199d7b0f41770dbe9b3b4d8525c428e7b579bb5ace011d9dffdc8c99f2f70.scope - libcontainer container b63199d7b0f41770dbe9b3b4d8525c428e7b579bb5ace011d9dffdc8c99f2f70. Jan 30 13:49:52.490478 systemd[1]: Started cri-containerd-d80d2c25e1fb69720dbe40f680be2abd82b2cf09f9d44a2d5471c548d19ab096.scope - libcontainer container d80d2c25e1fb69720dbe40f680be2abd82b2cf09f9d44a2d5471c548d19ab096. Jan 30 13:49:52.520671 systemd[1]: Started cri-containerd-039b1ba81e1e8ceec5fd1d056ab4071accc2521c91b5bee86ad111579536108b.scope - libcontainer container 039b1ba81e1e8ceec5fd1d056ab4071accc2521c91b5bee86ad111579536108b. 
Jan 30 13:49:52.567074 kubelet[2206]: W0130 13:49:52.567029 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:52.569599 kubelet[2206]: E0130 13:49:52.567212 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:52.588867 containerd[1461]: time="2025-01-30T13:49:52.588823588Z" level=info msg="StartContainer for \"b63199d7b0f41770dbe9b3b4d8525c428e7b579bb5ace011d9dffdc8c99f2f70\" returns successfully" Jan 30 13:49:52.616441 kubelet[2206]: E0130 13:49:52.616302 2206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal.181f7c9c1788cee8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,UID:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,},FirstTimestamp:2025-01-30 13:49:51.039885032 +0000 UTC m=+0.684232468,LastTimestamp:2025-01-30 13:49:51.039885032 +0000 UTC m=+0.684232468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal,}" Jan 30 13:49:52.628567 containerd[1461]: time="2025-01-30T13:49:52.628524421Z" level=info msg="StartContainer for \"d80d2c25e1fb69720dbe40f680be2abd82b2cf09f9d44a2d5471c548d19ab096\" returns successfully" Jan 30 13:49:52.631858 kubelet[2206]: W0130 13:49:52.631791 2206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Jan 30 13:49:52.632081 kubelet[2206]: E0130 13:49:52.632052 2206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.25:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:49:52.639396 containerd[1461]: time="2025-01-30T13:49:52.638018995Z" level=info msg="StartContainer for \"039b1ba81e1e8ceec5fd1d056ab4071accc2521c91b5bee86ad111579536108b\" returns successfully" Jan 30 13:49:52.679504 kubelet[2206]: I0130 13:49:52.679461 2206 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:52.680476 kubelet[2206]: E0130 13:49:52.680425 2206 kubelet_node_status.go:108] "Unable to 
register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: connect: connection refused" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:53.107435 kubelet[2206]: E0130 13:49:53.107077 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:53.108291 kubelet[2206]: E0130 13:49:53.107952 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:53.115924 kubelet[2206]: E0130 13:49:53.115657 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:54.115589 kubelet[2206]: E0130 13:49:54.115236 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:54.115589 kubelet[2206]: E0130 13:49:54.115325 2206 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:54.289912 kubelet[2206]: I0130 13:49:54.288736 2206 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.132672 kubelet[2206]: E0130 13:49:56.132609 2206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.175471 kubelet[2206]: I0130 13:49:56.173572 2206 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.247300 kubelet[2206]: I0130 13:49:56.246521 2206 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.254825 kubelet[2206]: I0130 13:49:56.254796 2206 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.286957 kubelet[2206]: E0130 13:49:56.286772 2206 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.289206 kubelet[2206]: E0130 13:49:56.288228 2206 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.289206 kubelet[2206]: I0130 13:49:56.288282 2206 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.300289 kubelet[2206]: E0130 13:49:56.299099 2206 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.300289 kubelet[2206]: I0130 13:49:56.299132 2206 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:56.305822 kubelet[2206]: E0130 13:49:56.305766 2206 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:57.034434 kubelet[2206]: I0130 13:49:57.034385 2206 apiserver.go:52] "Watching apiserver" Jan 30 13:49:57.057625 kubelet[2206]: I0130 13:49:57.057575 2206 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:49:57.835937 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-9.scope)... Jan 30 13:49:57.835961 systemd[1]: Reloading... Jan 30 13:49:57.974298 zram_generator::config[2514]: No configuration found. Jan 30 13:49:58.124530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:58.252396 systemd[1]: Reloading finished in 415 ms. Jan 30 13:49:58.303346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:58.325167 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:49:58.325493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:58.325575 systemd[1]: kubelet.service: Consumed 1.164s CPU time, 122.2M memory peak, 0B memory swap peak. Jan 30 13:49:58.332685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:58.601504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:58.610099 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:49:58.676550 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:49:58.676550 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:49:58.676550 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:49:58.677150 kubelet[2562]: I0130 13:49:58.676693 2562 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:49:58.690322 kubelet[2562]: I0130 13:49:58.689811 2562 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:49:58.690322 kubelet[2562]: I0130 13:49:58.689837 2562 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:49:58.690322 kubelet[2562]: I0130 13:49:58.690073 2562 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:49:58.691600 kubelet[2562]: I0130 13:49:58.691563 2562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:49:58.695997 kubelet[2562]: I0130 13:49:58.695795 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:49:58.699698 kubelet[2562]: E0130 13:49:58.699664 2562 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:49:58.699805 kubelet[2562]: I0130 13:49:58.699763 2562 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:49:58.703323 kubelet[2562]: I0130 13:49:58.703299 2562 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:49:58.703701 kubelet[2562]: I0130 13:49:58.703645 2562 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:49:58.703928 kubelet[2562]: I0130 13:49:58.703688 2562 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:49:58.704085 kubelet[2562]: I0130 
13:49:58.703928 2562 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:49:58.704085 kubelet[2562]: I0130 13:49:58.703948 2562 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:49:58.704085 kubelet[2562]: I0130 13:49:58.704004 2562 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:49:58.704311 kubelet[2562]: I0130 13:49:58.704230 2562 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:49:58.704311 kubelet[2562]: I0130 13:49:58.704251 2562 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:49:58.704311 kubelet[2562]: I0130 13:49:58.704299 2562 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:49:58.704469 kubelet[2562]: I0130 13:49:58.704316 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:49:58.707553 kubelet[2562]: I0130 13:49:58.707355 2562 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:49:58.710287 kubelet[2562]: I0130 13:49:58.708408 2562 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:49:58.717736 kubelet[2562]: I0130 13:49:58.717198 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:49:58.717736 kubelet[2562]: I0130 13:49:58.717246 2562 server.go:1287] "Started kubelet" Jan 30 13:49:58.722670 kubelet[2562]: I0130 13:49:58.722649 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:49:58.732231 kubelet[2562]: I0130 13:49:58.732166 2562 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:49:58.735170 kubelet[2562]: I0130 13:49:58.733606 2562 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:49:58.735170 kubelet[2562]: I0130 13:49:58.734925 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:49:58.735382 kubelet[2562]: I0130 13:49:58.735216 2562 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:49:58.735541 kubelet[2562]: I0130 13:49:58.735514 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:49:58.740704 kubelet[2562]: I0130 13:49:58.740088 2562 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:49:58.740857 kubelet[2562]: E0130 13:49:58.740717 2562 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" not found" Jan 30 13:49:58.745847 kubelet[2562]: I0130 13:49:58.745572 2562 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:49:58.745847 kubelet[2562]: I0130 13:49:58.745702 2562 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:49:58.752658 kubelet[2562]: I0130 13:49:58.752625 2562 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:49:58.759492 kubelet[2562]: I0130 13:49:58.759462 2562 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:49:58.760060 kubelet[2562]: I0130 13:49:58.759628 2562 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:49:58.763842 kubelet[2562]: I0130 
13:49:58.763777 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:49:58.765349 kubelet[2562]: I0130 13:49:58.765168 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:49:58.765349 kubelet[2562]: I0130 13:49:58.765200 2562 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:49:58.765349 kubelet[2562]: I0130 13:49:58.765221 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 13:49:58.765349 kubelet[2562]: I0130 13:49:58.765229 2562 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:49:58.765349 kubelet[2562]: E0130 13:49:58.765305 2562 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:49:58.833798 kubelet[2562]: I0130 13:49:58.833763 2562 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:49:58.833798 kubelet[2562]: I0130 13:49:58.833792 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:49:58.834002 kubelet[2562]: I0130 13:49:58.833817 2562 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:49:58.834285 kubelet[2562]: I0130 13:49:58.834053 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:49:58.834285 kubelet[2562]: I0130 13:49:58.834076 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:49:58.834285 kubelet[2562]: I0130 13:49:58.834107 2562 policy_none.go:49] "None policy: Start" Jan 30 13:49:58.834285 kubelet[2562]: I0130 13:49:58.834122 2562 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:49:58.834285 kubelet[2562]: I0130 13:49:58.834139 2562 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:49:58.834561 kubelet[2562]: I0130 13:49:58.834331 2562 state_mem.go:75] "Updated machine memory state" Jan 30 13:49:58.840705 kubelet[2562]: I0130 13:49:58.840681 2562 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:49:58.842405 kubelet[2562]: I0130 13:49:58.842167 2562 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:49:58.842405 kubelet[2562]: I0130 13:49:58.842190 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:49:58.843803 kubelet[2562]: I0130 13:49:58.842748 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:49:58.847757 kubelet[2562]: E0130 13:49:58.845939 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:49:58.867495 kubelet[2562]: I0130 13:49:58.867308 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.869105 kubelet[2562]: I0130 13:49:58.868727 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.869945 kubelet[2562]: I0130 13:49:58.869529 2562 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.884379 kubelet[2562]: W0130 13:49:58.884354 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 13:49:58.885396 kubelet[2562]: W0130 13:49:58.884984 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 13:49:58.885716 kubelet[2562]: W0130 13:49:58.885014 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 13:49:58.961592 kubelet[2562]: I0130 13:49:58.961522 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ea8ba3199ced86c6029bdbcb604e29a-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"5ea8ba3199ced86c6029bdbcb604e29a\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.961592 kubelet[2562]: I0130 13:49:58.961578 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ea8ba3199ced86c6029bdbcb604e29a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"5ea8ba3199ced86c6029bdbcb604e29a\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.961832 kubelet[2562]: I0130 13:49:58.961631 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.961832 kubelet[2562]: I0130 13:49:58.961658 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.961832 kubelet[2562]: I0130 13:49:58.961702 2562 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.961832 kubelet[2562]: I0130 13:49:58.961736 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7581093808d45e82f382678a14b1e986-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"7581093808d45e82f382678a14b1e986\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.962083 kubelet[2562]: I0130 13:49:58.961761 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.962083 kubelet[2562]: I0130 13:49:58.961796 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecdfc0968b5849cd8e9af49dcb4f2085-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"ecdfc0968b5849cd8e9af49dcb4f2085\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.962083 kubelet[2562]: I0130 13:49:58.961828 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ea8ba3199ced86c6029bdbcb604e29a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal\" (UID: \"5ea8ba3199ced86c6029bdbcb604e29a\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.966137 kubelet[2562]: I0130 13:49:58.966104 2562 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.976083 kubelet[2562]: I0130 13:49:58.975681 2562 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:58.976083 kubelet[2562]: I0130 13:49:58.975775 2562 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:49:59.400317 update_engine[1449]: I20250130 13:49:59.399320 1449 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:49:59.495403 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2612) Jan 30 13:49:59.643466 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2616) Jan 30 13:49:59.711491 kubelet[2562]: I0130 13:49:59.705613 2562 apiserver.go:52] "Watching apiserver" Jan 30 13:49:59.760235 kubelet[2562]: I0130 13:49:59.760206 2562 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:49:59.812304 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2616) Jan 30 13:49:59.927779 kubelet[2562]: I0130 13:49:59.927702 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" podStartSLOduration=1.926920125 podStartE2EDuration="1.926920125s" podCreationTimestamp="2025-01-30 13:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:59.894154195 +0000 UTC m=+1.277271574" watchObservedRunningTime="2025-01-30 13:49:59.926920125 +0000 UTC m=+1.310037501" Jan 30 13:49:59.928233 kubelet[2562]: I0130 13:49:59.928187 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" podStartSLOduration=1.9281679619999998 podStartE2EDuration="1.928167962s" podCreationTimestamp="2025-01-30 13:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:59.920678997 +0000 UTC m=+1.303796376" watchObservedRunningTime="2025-01-30 13:49:59.928167962 +0000 UTC m=+1.311285320" Jan 30 13:49:59.968928 kubelet[2562]: I0130 13:49:59.967544 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" podStartSLOduration=1.967517195 podStartE2EDuration="1.967517195s" podCreationTimestamp="2025-01-30 13:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:59.945423901 +0000 UTC m=+1.328541284" watchObservedRunningTime="2025-01-30 13:49:59.967517195 +0000 UTC m=+1.350634574" Jan 30 13:50:04.846982 sudo[1737]: pam_unix(sudo:session): session closed for user root Jan 30 13:50:04.899200 sshd[1734]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:04.905682 systemd[1]: sshd@8-10.128.0.25:22-139.178.68.195:39592.service: Deactivated successfully. Jan 30 13:50:04.908681 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:50:04.908952 systemd[1]: session-9.scope: Consumed 6.448s CPU time, 155.8M memory peak, 0B memory swap peak. Jan 30 13:50:04.909927 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:50:04.911622 systemd-logind[1448]: Removed session 9. Jan 30 13:50:05.322557 kubelet[2562]: I0130 13:50:05.322495 2562 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:50:05.323134 containerd[1461]: time="2025-01-30T13:50:05.323090893Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:50:05.323796 kubelet[2562]: I0130 13:50:05.323395 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:50:05.798055 systemd[1]: Created slice kubepods-besteffort-podec5d8c90_e20e_452c_8f56_1d7fed8e0d83.slice - libcontainer container kubepods-besteffort-podec5d8c90_e20e_452c_8f56_1d7fed8e0d83.slice. Jan 30 13:50:05.906750 kubelet[2562]: I0130 13:50:05.906640 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec5d8c90-e20e-452c-8f56-1d7fed8e0d83-kube-proxy\") pod \"kube-proxy-lbx4v\" (UID: \"ec5d8c90-e20e-452c-8f56-1d7fed8e0d83\") " pod="kube-system/kube-proxy-lbx4v" Jan 30 13:50:05.906750 kubelet[2562]: I0130 13:50:05.906697 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec5d8c90-e20e-452c-8f56-1d7fed8e0d83-lib-modules\") pod \"kube-proxy-lbx4v\" (UID: \"ec5d8c90-e20e-452c-8f56-1d7fed8e0d83\") " pod="kube-system/kube-proxy-lbx4v" Jan 30 13:50:05.906750 kubelet[2562]: I0130 13:50:05.906743 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsngs\" (UniqueName: \"kubernetes.io/projected/ec5d8c90-e20e-452c-8f56-1d7fed8e0d83-kube-api-access-qsngs\") pod \"kube-proxy-lbx4v\" (UID: \"ec5d8c90-e20e-452c-8f56-1d7fed8e0d83\") " pod="kube-system/kube-proxy-lbx4v" Jan 30 13:50:05.907096 kubelet[2562]: I0130 13:50:05.906781 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec5d8c90-e20e-452c-8f56-1d7fed8e0d83-xtables-lock\") pod \"kube-proxy-lbx4v\" (UID: \"ec5d8c90-e20e-452c-8f56-1d7fed8e0d83\") " pod="kube-system/kube-proxy-lbx4v" Jan 30 13:50:06.111959 containerd[1461]: time="2025-01-30T13:50:06.111376379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbx4v,Uid:ec5d8c90-e20e-452c-8f56-1d7fed8e0d83,Namespace:kube-system,Attempt:0,}" Jan 30 13:50:06.151351 containerd[1461]: time="2025-01-30T13:50:06.149464380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:06.151351 containerd[1461]: time="2025-01-30T13:50:06.149556765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:06.151351 containerd[1461]: time="2025-01-30T13:50:06.149578172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:06.151351 containerd[1461]: time="2025-01-30T13:50:06.149691553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:06.190477 systemd[1]: Started cri-containerd-3ac33b653579bf2d02f05fc887d0b4966b64e136c3b92e603e65b0c4322d4723.scope - libcontainer container 3ac33b653579bf2d02f05fc887d0b4966b64e136c3b92e603e65b0c4322d4723. 
Jan 30 13:50:06.229171 containerd[1461]: time="2025-01-30T13:50:06.229095598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbx4v,Uid:ec5d8c90-e20e-452c-8f56-1d7fed8e0d83,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ac33b653579bf2d02f05fc887d0b4966b64e136c3b92e603e65b0c4322d4723\"" Jan 30 13:50:06.236090 containerd[1461]: time="2025-01-30T13:50:06.235851542Z" level=info msg="CreateContainer within sandbox \"3ac33b653579bf2d02f05fc887d0b4966b64e136c3b92e603e65b0c4322d4723\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:50:06.255406 containerd[1461]: time="2025-01-30T13:50:06.255357238Z" level=info msg="CreateContainer within sandbox \"3ac33b653579bf2d02f05fc887d0b4966b64e136c3b92e603e65b0c4322d4723\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d9b1934fcb86ad3b5cf04d4b1bfa7715f174a6ccd33dccdffd1629d014c0fb23\"" Jan 30 13:50:06.256325 containerd[1461]: time="2025-01-30T13:50:06.256112379Z" level=info msg="StartContainer for \"d9b1934fcb86ad3b5cf04d4b1bfa7715f174a6ccd33dccdffd1629d014c0fb23\"" Jan 30 13:50:06.296479 systemd[1]: Started cri-containerd-d9b1934fcb86ad3b5cf04d4b1bfa7715f174a6ccd33dccdffd1629d014c0fb23.scope - libcontainer container d9b1934fcb86ad3b5cf04d4b1bfa7715f174a6ccd33dccdffd1629d014c0fb23. Jan 30 13:50:06.357130 systemd[1]: Created slice kubepods-besteffort-pod78c1b5c7_9ee0_4fb0_a2b2_d8de5fc98f81.slice - libcontainer container kubepods-besteffort-pod78c1b5c7_9ee0_4fb0_a2b2_d8de5fc98f81.slice. Jan 30 13:50:06.359533 containerd[1461]: time="2025-01-30T13:50:06.358185522Z" level=info msg="StartContainer for \"d9b1934fcb86ad3b5cf04d4b1bfa7715f174a6ccd33dccdffd1629d014c0fb23\" returns successfully" Jan 30 13:50:06.511292 kubelet[2562]: I0130 13:50:06.511229 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwnxg\" (UniqueName: \"kubernetes.io/projected/78c1b5c7-9ee0-4fb0-a2b2-d8de5fc98f81-kube-api-access-qwnxg\") pod \"tigera-operator-7d68577dc5-5jfkq\" (UID: \"78c1b5c7-9ee0-4fb0-a2b2-d8de5fc98f81\") " pod="tigera-operator/tigera-operator-7d68577dc5-5jfkq" Jan 30 13:50:06.511872 kubelet[2562]: I0130 13:50:06.511336 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78c1b5c7-9ee0-4fb0-a2b2-d8de5fc98f81-var-lib-calico\") pod \"tigera-operator-7d68577dc5-5jfkq\" (UID: \"78c1b5c7-9ee0-4fb0-a2b2-d8de5fc98f81\") " pod="tigera-operator/tigera-operator-7d68577dc5-5jfkq" Jan 30 13:50:06.665245 containerd[1461]: time="2025-01-30T13:50:06.665100199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-5jfkq,Uid:78c1b5c7-9ee0-4fb0-a2b2-d8de5fc98f81,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:50:06.715497 containerd[1461]: time="2025-01-30T13:50:06.715064287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:06.715497 containerd[1461]: time="2025-01-30T13:50:06.715311025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:06.716507 containerd[1461]: time="2025-01-30T13:50:06.715345048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:06.716507 containerd[1461]: time="2025-01-30T13:50:06.716065232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:06.748836 systemd[1]: Started cri-containerd-234a6c049b27580f5e6e23bc9a9466c7b0fc56da753d59c1295cd7ca15484b62.scope - libcontainer container 234a6c049b27580f5e6e23bc9a9466c7b0fc56da753d59c1295cd7ca15484b62. Jan 30 13:50:06.852420 containerd[1461]: time="2025-01-30T13:50:06.851600244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-5jfkq,Uid:78c1b5c7-9ee0-4fb0-a2b2-d8de5fc98f81,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"234a6c049b27580f5e6e23bc9a9466c7b0fc56da753d59c1295cd7ca15484b62\"" Jan 30 13:50:06.856396 containerd[1461]: time="2025-01-30T13:50:06.856354290Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:50:06.859628 kubelet[2562]: I0130 13:50:06.859407 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lbx4v" podStartSLOduration=1.859357359 podStartE2EDuration="1.859357359s" podCreationTimestamp="2025-01-30 13:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:50:06.857145974 +0000 UTC m=+8.240263367" watchObservedRunningTime="2025-01-30 13:50:06.859357359 +0000 UTC m=+8.242474741" Jan 30 13:50:07.864901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704337046.mount: Deactivated successfully. Jan 30 13:50:08.990627 containerd[1461]: time="2025-01-30T13:50:08.990554194Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:08.991960 containerd[1461]: time="2025-01-30T13:50:08.991870428Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:50:08.993036 containerd[1461]: time="2025-01-30T13:50:08.992962874Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:08.997647 containerd[1461]: time="2025-01-30T13:50:08.997575626Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:08.998730 containerd[1461]: time="2025-01-30T13:50:08.998659136Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.142247371s" Jan 30 13:50:08.998832 containerd[1461]: time="2025-01-30T13:50:08.998726333Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:50:09.002773 containerd[1461]: time="2025-01-30T13:50:09.002735347Z" level=info msg="CreateContainer within sandbox \"234a6c049b27580f5e6e23bc9a9466c7b0fc56da753d59c1295cd7ca15484b62\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:50:09.021629 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1053508600.mount: Deactivated successfully. Jan 30 13:50:09.022762 containerd[1461]: time="2025-01-30T13:50:09.022716134Z" level=info msg="CreateContainer within sandbox \"234a6c049b27580f5e6e23bc9a9466c7b0fc56da753d59c1295cd7ca15484b62\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a153ff7f0bf1efb7dee0793a3ae58c7cd63af8decaf5772c846e1d2aaf3a797d\"" Jan 30 13:50:09.024316 containerd[1461]: time="2025-01-30T13:50:09.024188944Z" level=info msg="StartContainer for \"a153ff7f0bf1efb7dee0793a3ae58c7cd63af8decaf5772c846e1d2aaf3a797d\"" Jan 30 13:50:09.069945 systemd[1]: run-containerd-runc-k8s.io-a153ff7f0bf1efb7dee0793a3ae58c7cd63af8decaf5772c846e1d2aaf3a797d-runc.gLX070.mount: Deactivated successfully. Jan 30 13:50:09.079104 systemd[1]: Started cri-containerd-a153ff7f0bf1efb7dee0793a3ae58c7cd63af8decaf5772c846e1d2aaf3a797d.scope - libcontainer container a153ff7f0bf1efb7dee0793a3ae58c7cd63af8decaf5772c846e1d2aaf3a797d. Jan 30 13:50:09.129757 containerd[1461]: time="2025-01-30T13:50:09.129535423Z" level=info msg="StartContainer for \"a153ff7f0bf1efb7dee0793a3ae58c7cd63af8decaf5772c846e1d2aaf3a797d\" returns successfully" Jan 30 13:50:12.401454 kubelet[2562]: I0130 13:50:12.401367 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-5jfkq" podStartSLOduration=4.2557693 podStartE2EDuration="6.401342746s" podCreationTimestamp="2025-01-30 13:50:06 +0000 UTC" firstStartedPulling="2025-01-30 13:50:06.854346806 +0000 UTC m=+8.237464176" lastFinishedPulling="2025-01-30 13:50:08.999920249 +0000 UTC m=+10.383037622" observedRunningTime="2025-01-30 13:50:09.865725305 +0000 UTC m=+11.248842688" watchObservedRunningTime="2025-01-30 13:50:12.401342746 +0000 UTC m=+13.784460125" Jan 30 13:50:12.415590 systemd[1]: Created slice kubepods-besteffort-pode82d8bc5_c6c3_4cbb_8545_f22f22a5e303.slice - libcontainer container kubepods-besteffort-pode82d8bc5_c6c3_4cbb_8545_f22f22a5e303.slice. Jan 30 13:50:12.517968 systemd[1]: Created slice kubepods-besteffort-podac56559e_0dd7_4990_ae15_c03ffae1689a.slice - libcontainer container kubepods-besteffort-podac56559e_0dd7_4990_ae15_c03ffae1689a.slice. 
Jan 30 13:50:12.550621 kubelet[2562]: I0130 13:50:12.550554 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-cni-bin-dir\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.550621 kubelet[2562]: I0130 13:50:12.550626 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-cni-net-dir\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.550877 kubelet[2562]: I0130 13:50:12.550657 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e82d8bc5-c6c3-4cbb-8545-f22f22a5e303-tigera-ca-bundle\") pod \"calico-typha-7df4999d99-j5jmf\" (UID: \"e82d8bc5-c6c3-4cbb-8545-f22f22a5e303\") " pod="calico-system/calico-typha-7df4999d99-j5jmf" Jan 30 13:50:12.550877 kubelet[2562]: I0130 13:50:12.550683 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqdjb\" (UniqueName: \"kubernetes.io/projected/ac56559e-0dd7-4990-ae15-c03ffae1689a-kube-api-access-vqdjb\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.550877 kubelet[2562]: I0130 13:50:12.550707 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e82d8bc5-c6c3-4cbb-8545-f22f22a5e303-typha-certs\") pod \"calico-typha-7df4999d99-j5jmf\" (UID: \"e82d8bc5-c6c3-4cbb-8545-f22f22a5e303\") " pod="calico-system/calico-typha-7df4999d99-j5jmf" Jan 30 13:50:12.550877 kubelet[2562]: I0130 13:50:12.550733 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-xtables-lock\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.550877 kubelet[2562]: I0130 13:50:12.550758 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac56559e-0dd7-4990-ae15-c03ffae1689a-tigera-ca-bundle\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.551147 kubelet[2562]: I0130 13:50:12.550782 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-var-run-calico\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.551147 kubelet[2562]: I0130 13:50:12.550806 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-cni-log-dir\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.551147 
kubelet[2562]: I0130 13:50:12.550835 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-lib-modules\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.551147 kubelet[2562]: I0130 13:50:12.550862 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-policysync\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.551147 kubelet[2562]: I0130 13:50:12.550889 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlhl9\" (UniqueName: \"kubernetes.io/projected/e82d8bc5-c6c3-4cbb-8545-f22f22a5e303-kube-api-access-rlhl9\") pod \"calico-typha-7df4999d99-j5jmf\" (UID: \"e82d8bc5-c6c3-4cbb-8545-f22f22a5e303\") " pod="calico-system/calico-typha-7df4999d99-j5jmf" Jan 30 13:50:12.551429 kubelet[2562]: I0130 13:50:12.550919 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-flexvol-driver-host\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.551429 kubelet[2562]: I0130 13:50:12.550949 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ac56559e-0dd7-4990-ae15-c03ffae1689a-node-certs\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.551429 kubelet[2562]: I0130 13:50:12.550983 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac56559e-0dd7-4990-ae15-c03ffae1689a-var-lib-calico\") pod \"calico-node-krczz\" (UID: \"ac56559e-0dd7-4990-ae15-c03ffae1689a\") " pod="calico-system/calico-node-krczz" Jan 30 13:50:12.623371 kubelet[2562]: E0130 13:50:12.622490 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zc8sb" podUID="8fb98178-8803-46f6-a0be-7adf365c426b" Jan 30 13:50:12.653644 kubelet[2562]: I0130 13:50:12.651981 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fb98178-8803-46f6-a0be-7adf365c426b-kubelet-dir\") pod \"csi-node-driver-zc8sb\" (UID: \"8fb98178-8803-46f6-a0be-7adf365c426b\") " pod="calico-system/csi-node-driver-zc8sb" Jan 30 13:50:12.653644 kubelet[2562]: I0130 13:50:12.652080 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8fb98178-8803-46f6-a0be-7adf365c426b-registration-dir\") pod \"csi-node-driver-zc8sb\" (UID: \"8fb98178-8803-46f6-a0be-7adf365c426b\") " pod="calico-system/csi-node-driver-zc8sb" Jan 30 13:50:12.653644 kubelet[2562]: I0130 13:50:12.652119 
2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8fb98178-8803-46f6-a0be-7adf365c426b-socket-dir\") pod \"csi-node-driver-zc8sb\" (UID: \"8fb98178-8803-46f6-a0be-7adf365c426b\") " pod="calico-system/csi-node-driver-zc8sb" Jan 30 13:50:12.653644 kubelet[2562]: I0130 13:50:12.652256 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n5cl\" (UniqueName: \"kubernetes.io/projected/8fb98178-8803-46f6-a0be-7adf365c426b-kube-api-access-8n5cl\") pod \"csi-node-driver-zc8sb\" (UID: \"8fb98178-8803-46f6-a0be-7adf365c426b\") " pod="calico-system/csi-node-driver-zc8sb" Jan 30 13:50:12.653644 kubelet[2562]: I0130 13:50:12.652358 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8fb98178-8803-46f6-a0be-7adf365c426b-varrun\") pod \"csi-node-driver-zc8sb\" (UID: \"8fb98178-8803-46f6-a0be-7adf365c426b\") " pod="calico-system/csi-node-driver-zc8sb" Jan 30 13:50:12.669298 kubelet[2562]: E0130 13:50:12.668765 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.669298 kubelet[2562]: W0130 13:50:12.668794 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.669298 kubelet[2562]: E0130 13:50:12.668825 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.674731 kubelet[2562]: E0130 13:50:12.674689 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.675375 kubelet[2562]: W0130 13:50:12.675343 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.690293 kubelet[2562]: E0130 13:50:12.687456 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.698603 kubelet[2562]: E0130 13:50:12.698461 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.698603 kubelet[2562]: W0130 13:50:12.698489 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.698603 kubelet[2562]: E0130 13:50:12.698544 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:50:12.718186 kubelet[2562]: E0130 13:50:12.717387 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.718186 kubelet[2562]: W0130 13:50:12.717422 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.718186 kubelet[2562]: E0130 13:50:12.717464 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.725927 kubelet[2562]: E0130 13:50:12.724897 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.725927 kubelet[2562]: W0130 13:50:12.724940 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.725927 kubelet[2562]: E0130 13:50:12.724969 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.726199 containerd[1461]: time="2025-01-30T13:50:12.725443958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df4999d99-j5jmf,Uid:e82d8bc5-c6c3-4cbb-8545-f22f22a5e303,Namespace:calico-system,Attempt:0,}" Jan 30 13:50:12.761604 kubelet[2562]: E0130 13:50:12.759314 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.761604 kubelet[2562]: W0130 13:50:12.759347 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.761604 kubelet[2562]: E0130 13:50:12.759379 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.761604 kubelet[2562]: E0130 13:50:12.760469 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.761604 kubelet[2562]: W0130 13:50:12.760487 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.765289 kubelet[2562]: E0130 13:50:12.763414 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:50:12.765289 kubelet[2562]: E0130 13:50:12.764897 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.765289 kubelet[2562]: W0130 13:50:12.764915 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.765289 kubelet[2562]: E0130 13:50:12.764948 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.765580 kubelet[2562]: E0130 13:50:12.765482 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.765580 kubelet[2562]: W0130 13:50:12.765506 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.770306 kubelet[2562]: E0130 13:50:12.767182 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.770306 kubelet[2562]: W0130 13:50:12.767203 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.770306 kubelet[2562]: E0130 13:50:12.769362 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.770306 kubelet[2562]: W0130 13:50:12.769379 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.770306 kubelet[2562]: E0130 13:50:12.769400 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.770306 kubelet[2562]: E0130 13:50:12.770197 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.770306 kubelet[2562]: W0130 13:50:12.770212 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.770306 kubelet[2562]: E0130 13:50:12.770230 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:50:12.772606 kubelet[2562]: E0130 13:50:12.772581 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.772606 kubelet[2562]: W0130 13:50:12.772604 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.772769 kubelet[2562]: E0130 13:50:12.772624 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.774289 kubelet[2562]: E0130 13:50:12.773601 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.774289 kubelet[2562]: W0130 13:50:12.773627 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.774289 kubelet[2562]: E0130 13:50:12.773648 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.774289 kubelet[2562]: E0130 13:50:12.773687 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.774565 kubelet[2562]: E0130 13:50:12.774316 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.774565 kubelet[2562]: W0130 13:50:12.774473 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.774565 kubelet[2562]: E0130 13:50:12.774492 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.775169 kubelet[2562]: E0130 13:50:12.775118 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.777753 kubelet[2562]: E0130 13:50:12.776424 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.777753 kubelet[2562]: W0130 13:50:12.776446 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.777905 kubelet[2562]: E0130 13:50:12.777841 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:50:12.780149 kubelet[2562]: E0130 13:50:12.780109 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.780496 kubelet[2562]: W0130 13:50:12.780466 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.781399 kubelet[2562]: E0130 13:50:12.780509 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.784292 kubelet[2562]: E0130 13:50:12.782148 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.784292 kubelet[2562]: W0130 13:50:12.782202 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.784292 kubelet[2562]: E0130 13:50:12.783791 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.784746 kubelet[2562]: E0130 13:50:12.784721 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.784746 kubelet[2562]: W0130 13:50:12.784746 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.786426 kubelet[2562]: E0130 13:50:12.786396 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.786923 kubelet[2562]: E0130 13:50:12.786815 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.786923 kubelet[2562]: W0130 13:50:12.786839 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.786923 kubelet[2562]: E0130 13:50:12.786916 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.787194 kubelet[2562]: E0130 13:50:12.787172 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.787194 kubelet[2562]: W0130 13:50:12.787193 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.787350 kubelet[2562]: E0130 13:50:12.787284 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:50:12.792065 kubelet[2562]: E0130 13:50:12.790546 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.792065 kubelet[2562]: W0130 13:50:12.790571 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.792065 kubelet[2562]: E0130 13:50:12.790608 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.792065 kubelet[2562]: E0130 13:50:12.791714 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.792065 kubelet[2562]: W0130 13:50:12.791732 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.792065 kubelet[2562]: E0130 13:50:12.791861 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.793439 kubelet[2562]: E0130 13:50:12.792551 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.793439 kubelet[2562]: W0130 13:50:12.792569 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.793439 kubelet[2562]: E0130 13:50:12.793345 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.793942 kubelet[2562]: E0130 13:50:12.793901 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.793942 kubelet[2562]: W0130 13:50:12.793920 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.795514 kubelet[2562]: E0130 13:50:12.795311 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.795706 kubelet[2562]: E0130 13:50:12.795690 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.795880 kubelet[2562]: W0130 13:50:12.795785 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.795983 kubelet[2562]: E0130 13:50:12.795964 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:50:12.796423 kubelet[2562]: E0130 13:50:12.796368 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.796423 kubelet[2562]: W0130 13:50:12.796387 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.798143 kubelet[2562]: E0130 13:50:12.798107 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.799083 kubelet[2562]: E0130 13:50:12.798906 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.799083 kubelet[2562]: W0130 13:50:12.798940 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.799834 kubelet[2562]: E0130 13:50:12.799184 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.804386 kubelet[2562]: E0130 13:50:12.804308 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.804386 kubelet[2562]: W0130 13:50:12.804341 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.804806 kubelet[2562]: E0130 13:50:12.804487 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.806318 kubelet[2562]: E0130 13:50:12.805438 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.806589 kubelet[2562]: W0130 13:50:12.806395 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.806773 kubelet[2562]: E0130 13:50:12.806423 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.812015 containerd[1461]: time="2025-01-30T13:50:12.811851633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:12.812015 containerd[1461]: time="2025-01-30T13:50:12.811958618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:12.812015 containerd[1461]: time="2025-01-30T13:50:12.811983912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:12.812360 containerd[1461]: time="2025-01-30T13:50:12.812281320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:12.832024 containerd[1461]: time="2025-01-30T13:50:12.831819820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-krczz,Uid:ac56559e-0dd7-4990-ae15-c03ffae1689a,Namespace:calico-system,Attempt:0,}" Jan 30 13:50:12.861962 kubelet[2562]: E0130 13:50:12.861928 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:50:12.862149 kubelet[2562]: W0130 13:50:12.862127 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:50:12.862337 kubelet[2562]: E0130 13:50:12.862315 2562 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:50:12.898969 systemd[1]: Started cri-containerd-ea7f20ea93b27fc808073d09536f8004d98fb085cacf9aa66dc6e75c62d1f9d4.scope - libcontainer container ea7f20ea93b27fc808073d09536f8004d98fb085cacf9aa66dc6e75c62d1f9d4. Jan 30 13:50:12.937961 containerd[1461]: time="2025-01-30T13:50:12.937811941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:12.938371 containerd[1461]: time="2025-01-30T13:50:12.938022423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:12.939736 containerd[1461]: time="2025-01-30T13:50:12.938572890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:12.939736 containerd[1461]: time="2025-01-30T13:50:12.938727621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:12.975883 systemd[1]: Started cri-containerd-2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b.scope - libcontainer container 2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b. Jan 30 13:50:13.045525 containerd[1461]: time="2025-01-30T13:50:13.045174086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-krczz,Uid:ac56559e-0dd7-4990-ae15-c03ffae1689a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b\"" Jan 30 13:50:13.049125 containerd[1461]: time="2025-01-30T13:50:13.048983077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:50:13.099519 containerd[1461]: time="2025-01-30T13:50:13.099141005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df4999d99-j5jmf,Uid:e82d8bc5-c6c3-4cbb-8545-f22f22a5e303,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea7f20ea93b27fc808073d09536f8004d98fb085cacf9aa66dc6e75c62d1f9d4\"" Jan 30 13:50:14.079005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30898726.mount: Deactivated successfully. 
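The burst of driver-call failures above is the kubelet's FlexVolume prober walking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/: it finds a nodeagent~uds directory, tries to exec the uds binary with the single argument init, and then attempts to decode whatever the driver printed on stdout as JSON. Because the binary is not installed yet, the exec fails ("executable file not found in $PATH"), the output is empty, and the decode reports "unexpected end of JSON input". Below is a minimal sketch of the reply such a driver is conventionally expected to print; it is a hypothetical stand-in, not Calico's actual uds implementation.

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// driverStatus mirrors the JSON object the kubelet tries to decode after each
// driver call; an empty stdout is what produces "unexpected end of JSON input".
type driverStatus struct {
    Status       string          `json:"status"`
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    if len(os.Args) > 1 && os.Args[1] == "init" {
        out, _ := json.Marshal(driverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out))
        return
    }
    // Verbs this stand-in does not implement are reported as unsupported.
    out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    fmt.Println(string(out))
    os.Exit(1)
}

Invoked with the argument init, this prints {"status":"Success","capabilities":{"attach":false}}, i.e. exactly the kind of output whose absence the kubelet is complaining about in the entries above.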
Jan 30 13:50:14.239735 containerd[1461]: time="2025-01-30T13:50:14.239558171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:14.244073 containerd[1461]: time="2025-01-30T13:50:14.242632328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:50:14.244464 containerd[1461]: time="2025-01-30T13:50:14.244232183Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:14.250592 containerd[1461]: time="2025-01-30T13:50:14.250524355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:14.251850 containerd[1461]: time="2025-01-30T13:50:14.251710172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.20268329s" Jan 30 13:50:14.251850 containerd[1461]: time="2025-01-30T13:50:14.251756404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:50:14.255177 containerd[1461]: time="2025-01-30T13:50:14.254554651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:50:14.257281 containerd[1461]: time="2025-01-30T13:50:14.257122096Z" level=info msg="CreateContainer within sandbox \"2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:50:14.278190 containerd[1461]: time="2025-01-30T13:50:14.278128428Z" level=info msg="CreateContainer within sandbox \"2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5\"" Jan 30 13:50:14.278973 containerd[1461]: time="2025-01-30T13:50:14.278935371Z" level=info msg="StartContainer for \"3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5\"" Jan 30 13:50:14.329473 systemd[1]: Started cri-containerd-3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5.scope - libcontainer container 3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5. Jan 30 13:50:14.387373 containerd[1461]: time="2025-01-30T13:50:14.387301559Z" level=info msg="StartContainer for \"3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5\" returns successfully" Jan 30 13:50:14.406713 systemd[1]: cri-containerd-3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5.scope: Deactivated successfully. Jan 30 13:50:14.666602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5-rootfs.mount: Deactivated successfully. 
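The flexvol-driver container that just ran to completion comes from the pod2daemon-flexvol image and, in a typical Calico deployment, exists only to drop the FlexVolume driver binary into the kubelet's plugin directory, i.e. the nodeagent~uds/uds path the earlier probes could not find. A rough sketch of that effect under those assumptions (the real container ships a prebuilt binary, and the source path below is hypothetical):

package main

import (
    "io"
    "os"
    "path/filepath"
)

// installDriver copies a driver binary into the kubelet's FlexVolume plugin tree,
// the same nodeagent~uds/uds path the probes earlier in this log were scanning for.
func installDriver(src, pluginDir string) error {
    dst := filepath.Join(pluginDir, "nodeagent~uds", "uds")
    if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
        return err
    }
    in, err := os.Open(src)
    if err != nil {
        return err
    }
    defer in.Close()
    out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
    if err != nil {
        return err
    }
    defer out.Close()
    _, err = io.Copy(out, in)
    return err
}

func main() {
    // Source path is hypothetical; the plugin directory is the one from the log.
    if err := installDriver("/usr/local/bin/flexvol",
        "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"); err != nil {
        os.Exit(1)
    }
}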
Jan 30 13:50:14.751872 containerd[1461]: time="2025-01-30T13:50:14.751528392Z" level=info msg="shim disconnected" id=3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5 namespace=k8s.io Jan 30 13:50:14.751872 containerd[1461]: time="2025-01-30T13:50:14.751616832Z" level=warning msg="cleaning up after shim disconnected" id=3c3ec33cfd715155015174c4da8d44d378ded91553134898940a71dfa46a03b5 namespace=k8s.io Jan 30 13:50:14.751872 containerd[1461]: time="2025-01-30T13:50:14.751633291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:50:14.767888 kubelet[2562]: E0130 13:50:14.767724 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zc8sb" podUID="8fb98178-8803-46f6-a0be-7adf365c426b" Jan 30 13:50:16.503149 containerd[1461]: time="2025-01-30T13:50:16.503086934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:16.504735 containerd[1461]: time="2025-01-30T13:50:16.504666578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:50:16.506380 containerd[1461]: time="2025-01-30T13:50:16.506228575Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:16.511087 containerd[1461]: time="2025-01-30T13:50:16.509698622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:16.511087 containerd[1461]: time="2025-01-30T13:50:16.510640987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.256042606s" Jan 30 13:50:16.511087 containerd[1461]: time="2025-01-30T13:50:16.510680124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:50:16.513117 containerd[1461]: time="2025-01-30T13:50:16.513087257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:50:16.539074 containerd[1461]: time="2025-01-30T13:50:16.539014726Z" level=info msg="CreateContainer within sandbox \"ea7f20ea93b27fc808073d09536f8004d98fb085cacf9aa66dc6e75c62d1f9d4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:50:16.560834 containerd[1461]: time="2025-01-30T13:50:16.559923272Z" level=info msg="CreateContainer within sandbox \"ea7f20ea93b27fc808073d09536f8004d98fb085cacf9aa66dc6e75c62d1f9d4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4dbb2dd76aa4558aca29015bbc38a7e8aba705860ea7e245cc9dcc4d32066bba\"" Jan 30 13:50:16.561725 containerd[1461]: time="2025-01-30T13:50:16.561693784Z" level=info msg="StartContainer for \"4dbb2dd76aa4558aca29015bbc38a7e8aba705860ea7e245cc9dcc4d32066bba\"" Jan 30 13:50:16.631782 systemd[1]: Started 
cri-containerd-4dbb2dd76aa4558aca29015bbc38a7e8aba705860ea7e245cc9dcc4d32066bba.scope - libcontainer container 4dbb2dd76aa4558aca29015bbc38a7e8aba705860ea7e245cc9dcc4d32066bba. Jan 30 13:50:16.767130 kubelet[2562]: E0130 13:50:16.766639 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zc8sb" podUID="8fb98178-8803-46f6-a0be-7adf365c426b" Jan 30 13:50:16.857553 containerd[1461]: time="2025-01-30T13:50:16.856723506Z" level=info msg="StartContainer for \"4dbb2dd76aa4558aca29015bbc38a7e8aba705860ea7e245cc9dcc4d32066bba\" returns successfully" Jan 30 13:50:17.872048 kubelet[2562]: I0130 13:50:17.870948 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:18.767034 kubelet[2562]: E0130 13:50:18.766960 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zc8sb" podUID="8fb98178-8803-46f6-a0be-7adf365c426b" Jan 30 13:50:20.355542 containerd[1461]: time="2025-01-30T13:50:20.355481594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:20.356848 containerd[1461]: time="2025-01-30T13:50:20.356795538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:50:20.358085 containerd[1461]: time="2025-01-30T13:50:20.358022552Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:20.362197 containerd[1461]: time="2025-01-30T13:50:20.362131239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:20.363895 containerd[1461]: time="2025-01-30T13:50:20.363339616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.85008692s" Jan 30 13:50:20.363895 containerd[1461]: time="2025-01-30T13:50:20.363409568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:50:20.367510 containerd[1461]: time="2025-01-30T13:50:20.367433795Z" level=info msg="CreateContainer within sandbox \"2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:50:20.394102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351844153.mount: Deactivated successfully. 
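The "in 3.85008692s" figure reported for the calico/cni pull above is, to within a fraction of a millisecond, the gap between the PullImage entry and the corresponding Pulled image entry, and the kubelet's pod_startup_latency entry further down carries the same kind of timestamps (firstStartedPulling, lastFinishedPulling) that can be checked the same way. A small helper for verifying such intervals while reading a log like this one, with the two timestamps copied from the containerd lines above:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Timestamps copied from the containerd entries above (RFC 3339 with nanoseconds).
    start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:50:16.513087257Z") // PullImage calico/cni
    done, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:50:20.363339616Z")  // Pulled image calico/cni
    fmt.Println(done.Sub(start)) // prints roughly 3.850252359s, close to the reported 3.85008692s
}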
Jan 30 13:50:20.397694 containerd[1461]: time="2025-01-30T13:50:20.397643283Z" level=info msg="CreateContainer within sandbox \"2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a\"" Jan 30 13:50:20.399408 containerd[1461]: time="2025-01-30T13:50:20.398515784Z" level=info msg="StartContainer for \"d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a\"" Jan 30 13:50:20.447068 systemd[1]: run-containerd-runc-k8s.io-d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a-runc.r6Ey7A.mount: Deactivated successfully. Jan 30 13:50:20.455434 systemd[1]: Started cri-containerd-d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a.scope - libcontainer container d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a. Jan 30 13:50:20.501095 containerd[1461]: time="2025-01-30T13:50:20.501038351Z" level=info msg="StartContainer for \"d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a\" returns successfully" Jan 30 13:50:20.766324 kubelet[2562]: E0130 13:50:20.766040 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zc8sb" podUID="8fb98178-8803-46f6-a0be-7adf365c426b" Jan 30 13:50:20.909203 kubelet[2562]: I0130 13:50:20.908620 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7df4999d99-j5jmf" podStartSLOduration=5.49791865 podStartE2EDuration="8.908597414s" podCreationTimestamp="2025-01-30 13:50:12 +0000 UTC" firstStartedPulling="2025-01-30 13:50:13.101462169 +0000 UTC m=+14.484579522" lastFinishedPulling="2025-01-30 13:50:16.51214092 +0000 UTC m=+17.895258286" observedRunningTime="2025-01-30 13:50:16.888284189 +0000 UTC m=+18.271401569" watchObservedRunningTime="2025-01-30 13:50:20.908597414 +0000 UTC m=+22.291714796" Jan 30 13:50:21.453730 containerd[1461]: time="2025-01-30T13:50:21.453665566Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:50:21.458301 systemd[1]: cri-containerd-d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a.scope: Deactivated successfully. Jan 30 13:50:21.504973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a-rootfs.mount: Deactivated successfully. Jan 30 13:50:21.534662 kubelet[2562]: I0130 13:50:21.534430 2562 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:50:21.595969 systemd[1]: Created slice kubepods-burstable-poda7c8f8ce_efc5_476c_8e2d_4cfe1db0bb33.slice - libcontainer container kubepods-burstable-poda7c8f8ce_efc5_476c_8e2d_4cfe1db0bb33.slice. Jan 30 13:50:21.610465 systemd[1]: Created slice kubepods-besteffort-pod3a17c611_62b6_4f6c_b060_7ce3741ea277.slice - libcontainer container kubepods-besteffort-pod3a17c611_62b6_4f6c_b060_7ce3741ea277.slice. 
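The failed CNI reload above is containerd reacting to install-cni writing into /etc/cni/net.d: the fs change event it saw was the calico-kubeconfig file, but no network config (*.conflist) was present yet, so the plugin stays uninitialized and the "network is not ready" and sandbox errors that follow persist until the config lands and calico-node is up. What install-cni eventually leaves behind is roughly a conflist of the following shape; this is an abridged, assumed example rather than the exact file from this host, written from Go only to keep the sketch self-contained.

package main

import "os"

// An abridged example of the kind of network config install-cni writes; the exact
// contents on this host are not shown in the log, so treat the fields as assumptions.
const calicoConflist = `{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "kubeconfig": "/etc/cni/net.d/calico-kubeconfig",
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" }
    }
  ]
}
`

func main() {
    // containerd reloads its CNI configuration whenever files under /etc/cni/net.d
    // change, which is the fs change event visible in the log entry above.
    if err := os.WriteFile("/etc/cni/net.d/10-calico.conflist", []byte(calicoConflist), 0o644); err != nil {
        os.Exit(1)
    }
}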
Jan 30 13:50:21.624551 systemd[1]: Created slice kubepods-burstable-podbaa3f6f5_b333_4a45_a993_1c13d0b07855.slice - libcontainer container kubepods-burstable-podbaa3f6f5_b333_4a45_a993_1c13d0b07855.slice. Jan 30 13:50:21.639491 systemd[1]: Created slice kubepods-besteffort-pod2051b70c_c342_4cb4_8d41_4fbc2d79f291.slice - libcontainer container kubepods-besteffort-pod2051b70c_c342_4cb4_8d41_4fbc2d79f291.slice. Jan 30 13:50:21.646417 kubelet[2562]: I0130 13:50:21.646305 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddnf\" (UniqueName: \"kubernetes.io/projected/baa3f6f5-b333-4a45-a993-1c13d0b07855-kube-api-access-5ddnf\") pod \"coredns-668d6bf9bc-xtgk9\" (UID: \"baa3f6f5-b333-4a45-a993-1c13d0b07855\") " pod="kube-system/coredns-668d6bf9bc-xtgk9" Jan 30 13:50:21.648029 kubelet[2562]: I0130 13:50:21.647123 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2051b70c-c342-4cb4-8d41-4fbc2d79f291-calico-apiserver-certs\") pod \"calico-apiserver-5d54978bfb-cbdp5\" (UID: \"2051b70c-c342-4cb4-8d41-4fbc2d79f291\") " pod="calico-apiserver/calico-apiserver-5d54978bfb-cbdp5" Jan 30 13:50:21.648029 kubelet[2562]: I0130 13:50:21.647192 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a17c611-62b6-4f6c-b060-7ce3741ea277-tigera-ca-bundle\") pod \"calico-kube-controllers-5786599c57-zjrvp\" (UID: \"3a17c611-62b6-4f6c-b060-7ce3741ea277\") " pod="calico-system/calico-kube-controllers-5786599c57-zjrvp" Jan 30 13:50:21.648029 kubelet[2562]: I0130 13:50:21.647232 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/baa3f6f5-b333-4a45-a993-1c13d0b07855-config-volume\") pod \"coredns-668d6bf9bc-xtgk9\" (UID: \"baa3f6f5-b333-4a45-a993-1c13d0b07855\") " pod="kube-system/coredns-668d6bf9bc-xtgk9" Jan 30 13:50:21.648029 kubelet[2562]: I0130 13:50:21.647285 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33-config-volume\") pod \"coredns-668d6bf9bc-nmmjn\" (UID: \"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33\") " pod="kube-system/coredns-668d6bf9bc-nmmjn" Jan 30 13:50:21.648029 kubelet[2562]: I0130 13:50:21.647318 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6ck\" (UniqueName: \"kubernetes.io/projected/3a17c611-62b6-4f6c-b060-7ce3741ea277-kube-api-access-nm6ck\") pod \"calico-kube-controllers-5786599c57-zjrvp\" (UID: \"3a17c611-62b6-4f6c-b060-7ce3741ea277\") " pod="calico-system/calico-kube-controllers-5786599c57-zjrvp" Jan 30 13:50:21.648375 kubelet[2562]: I0130 13:50:21.647351 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58e99075-1376-48e5-b2c7-99d946da1951-calico-apiserver-certs\") pod \"calico-apiserver-5d54978bfb-dbkmf\" (UID: \"58e99075-1376-48e5-b2c7-99d946da1951\") " pod="calico-apiserver/calico-apiserver-5d54978bfb-dbkmf" Jan 30 13:50:21.648375 kubelet[2562]: I0130 13:50:21.647380 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2v98l\" (UniqueName: \"kubernetes.io/projected/58e99075-1376-48e5-b2c7-99d946da1951-kube-api-access-2v98l\") pod \"calico-apiserver-5d54978bfb-dbkmf\" (UID: \"58e99075-1376-48e5-b2c7-99d946da1951\") " pod="calico-apiserver/calico-apiserver-5d54978bfb-dbkmf" Jan 30 13:50:21.648375 kubelet[2562]: I0130 13:50:21.647413 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5m5v\" (UniqueName: \"kubernetes.io/projected/a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33-kube-api-access-k5m5v\") pod \"coredns-668d6bf9bc-nmmjn\" (UID: \"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33\") " pod="kube-system/coredns-668d6bf9bc-nmmjn" Jan 30 13:50:21.648375 kubelet[2562]: I0130 13:50:21.647447 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zhwr\" (UniqueName: \"kubernetes.io/projected/2051b70c-c342-4cb4-8d41-4fbc2d79f291-kube-api-access-4zhwr\") pod \"calico-apiserver-5d54978bfb-cbdp5\" (UID: \"2051b70c-c342-4cb4-8d41-4fbc2d79f291\") " pod="calico-apiserver/calico-apiserver-5d54978bfb-cbdp5" Jan 30 13:50:21.655583 systemd[1]: Created slice kubepods-besteffort-pod58e99075_1376_48e5_b2c7_99d946da1951.slice - libcontainer container kubepods-besteffort-pod58e99075_1376_48e5_b2c7_99d946da1951.slice. Jan 30 13:50:21.902499 containerd[1461]: time="2025-01-30T13:50:21.902326915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmmjn,Uid:a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33,Namespace:kube-system,Attempt:0,}" Jan 30 13:50:21.917323 containerd[1461]: time="2025-01-30T13:50:21.917238264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5786599c57-zjrvp,Uid:3a17c611-62b6-4f6c-b060-7ce3741ea277,Namespace:calico-system,Attempt:0,}" Jan 30 13:50:21.932812 containerd[1461]: time="2025-01-30T13:50:21.932752210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtgk9,Uid:baa3f6f5-b333-4a45-a993-1c13d0b07855,Namespace:kube-system,Attempt:0,}" Jan 30 13:50:21.948002 containerd[1461]: time="2025-01-30T13:50:21.947949404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-cbdp5,Uid:2051b70c-c342-4cb4-8d41-4fbc2d79f291,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:50:21.995449 containerd[1461]: time="2025-01-30T13:50:21.995382407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-dbkmf,Uid:58e99075-1376-48e5-b2c7-99d946da1951,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:50:22.540650 containerd[1461]: time="2025-01-30T13:50:22.540546697Z" level=info msg="shim disconnected" id=d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a namespace=k8s.io Jan 30 13:50:22.540650 containerd[1461]: time="2025-01-30T13:50:22.540616825Z" level=warning msg="cleaning up after shim disconnected" id=d58627082e499577d03ff2567bb1aebec88a7690945a7a077d49f1cd2d36884a namespace=k8s.io Jan 30 13:50:22.540650 containerd[1461]: time="2025-01-30T13:50:22.540631655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:50:22.779750 systemd[1]: Created slice kubepods-besteffort-pod8fb98178_8803_46f6_a0be_7adf365c426b.slice - libcontainer container kubepods-besteffort-pod8fb98178_8803_46f6_a0be_7adf365c426b.slice. 
Jan 30 13:50:22.794906 containerd[1461]: time="2025-01-30T13:50:22.794409528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zc8sb,Uid:8fb98178-8803-46f6-a0be-7adf365c426b,Namespace:calico-system,Attempt:0,}" Jan 30 13:50:22.881890 containerd[1461]: time="2025-01-30T13:50:22.881830132Z" level=error msg="Failed to destroy network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.885326 containerd[1461]: time="2025-01-30T13:50:22.885012068Z" level=error msg="encountered an error cleaning up failed sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.891047 containerd[1461]: time="2025-01-30T13:50:22.885772167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmmjn,Uid:a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.891848 kubelet[2562]: E0130 13:50:22.891337 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.891848 kubelet[2562]: E0130 13:50:22.891432 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nmmjn" Jan 30 13:50:22.891848 kubelet[2562]: E0130 13:50:22.891466 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nmmjn" Jan 30 13:50:22.892770 kubelet[2562]: E0130 13:50:22.892334 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nmmjn_kube-system(a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nmmjn_kube-system(a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nmmjn" podUID="a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33" Jan 30 13:50:22.903337 containerd[1461]: time="2025-01-30T13:50:22.902697196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:50:22.908289 kubelet[2562]: I0130 13:50:22.907896 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:22.912555 containerd[1461]: time="2025-01-30T13:50:22.912498289Z" level=info msg="StopPodSandbox for \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\"" Jan 30 13:50:22.912862 containerd[1461]: time="2025-01-30T13:50:22.912757039Z" level=info msg="Ensure that sandbox 2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f in task-service has been cleanup successfully" Jan 30 13:50:22.932055 containerd[1461]: time="2025-01-30T13:50:22.931996769Z" level=error msg="Failed to destroy network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.933110 containerd[1461]: time="2025-01-30T13:50:22.933056496Z" level=error msg="encountered an error cleaning up failed sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.933236 containerd[1461]: time="2025-01-30T13:50:22.933166984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtgk9,Uid:baa3f6f5-b333-4a45-a993-1c13d0b07855,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.934223 kubelet[2562]: E0130 13:50:22.933681 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.934223 kubelet[2562]: E0130 13:50:22.933774 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xtgk9" Jan 30 13:50:22.934223 kubelet[2562]: E0130 13:50:22.933834 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xtgk9" Jan 30 13:50:22.934535 kubelet[2562]: E0130 13:50:22.934179 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xtgk9_kube-system(baa3f6f5-b333-4a45-a993-1c13d0b07855)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xtgk9_kube-system(baa3f6f5-b333-4a45-a993-1c13d0b07855)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xtgk9" podUID="baa3f6f5-b333-4a45-a993-1c13d0b07855" Jan 30 13:50:22.958313 containerd[1461]: time="2025-01-30T13:50:22.956525486Z" level=error msg="Failed to destroy network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.958313 containerd[1461]: time="2025-01-30T13:50:22.956976043Z" level=error msg="encountered an error cleaning up failed sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.958313 containerd[1461]: time="2025-01-30T13:50:22.957044358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5786599c57-zjrvp,Uid:3a17c611-62b6-4f6c-b060-7ce3741ea277,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.958313 containerd[1461]: time="2025-01-30T13:50:22.957229075Z" level=error msg="Failed to destroy network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.958313 containerd[1461]: time="2025-01-30T13:50:22.957611142Z" level=error msg="encountered an error cleaning up failed sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.958313 containerd[1461]: time="2025-01-30T13:50:22.957664571Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-cbdp5,Uid:2051b70c-c342-4cb4-8d41-4fbc2d79f291,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.959057 kubelet[2562]: E0130 13:50:22.959020 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.959218 kubelet[2562]: E0130 13:50:22.959193 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d54978bfb-cbdp5" Jan 30 13:50:22.959378 kubelet[2562]: E0130 13:50:22.959351 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d54978bfb-cbdp5" Jan 30 13:50:22.959541 kubelet[2562]: E0130 13:50:22.959505 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d54978bfb-cbdp5_calico-apiserver(2051b70c-c342-4cb4-8d41-4fbc2d79f291)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d54978bfb-cbdp5_calico-apiserver(2051b70c-c342-4cb4-8d41-4fbc2d79f291)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d54978bfb-cbdp5" podUID="2051b70c-c342-4cb4-8d41-4fbc2d79f291" Jan 30 13:50:22.959952 kubelet[2562]: E0130 13:50:22.957251 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.960174 kubelet[2562]: E0130 13:50:22.960122 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5786599c57-zjrvp" Jan 30 13:50:22.960416 kubelet[2562]: E0130 13:50:22.960374 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5786599c57-zjrvp" Jan 30 13:50:22.960704 kubelet[2562]: E0130 13:50:22.960576 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5786599c57-zjrvp_calico-system(3a17c611-62b6-4f6c-b060-7ce3741ea277)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5786599c57-zjrvp_calico-system(3a17c611-62b6-4f6c-b060-7ce3741ea277)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5786599c57-zjrvp" podUID="3a17c611-62b6-4f6c-b060-7ce3741ea277" Jan 30 13:50:22.996587 containerd[1461]: time="2025-01-30T13:50:22.996519022Z" level=error msg="Failed to destroy network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.998365 containerd[1461]: time="2025-01-30T13:50:22.998318314Z" level=error msg="encountered an error cleaning up failed sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.999652 containerd[1461]: time="2025-01-30T13:50:22.998582682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-dbkmf,Uid:58e99075-1376-48e5-b2c7-99d946da1951,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:22.999990 kubelet[2562]: E0130 13:50:22.999948 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:23.000199 kubelet[2562]: E0130 13:50:23.000153 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d54978bfb-dbkmf" Jan 30 13:50:23.000506 kubelet[2562]: E0130 13:50:23.000330 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d54978bfb-dbkmf" Jan 30 13:50:23.001840 kubelet[2562]: E0130 13:50:23.001736 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d54978bfb-dbkmf_calico-apiserver(58e99075-1376-48e5-b2c7-99d946da1951)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d54978bfb-dbkmf_calico-apiserver(58e99075-1376-48e5-b2c7-99d946da1951)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d54978bfb-dbkmf" podUID="58e99075-1376-48e5-b2c7-99d946da1951" Jan 30 13:50:23.020042 containerd[1461]: time="2025-01-30T13:50:23.019964136Z" level=error msg="StopPodSandbox for \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\" failed" error="failed to destroy network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:23.020773 kubelet[2562]: E0130 13:50:23.020558 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:23.020773 kubelet[2562]: E0130 13:50:23.020638 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f"} Jan 30 13:50:23.020773 kubelet[2562]: E0130 13:50:23.020717 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:50:23.020773 kubelet[2562]: E0130 13:50:23.020752 2562 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nmmjn" podUID="a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33" Jan 30 13:50:23.027140 containerd[1461]: time="2025-01-30T13:50:23.027092400Z" level=error msg="Failed to destroy network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:23.027564 containerd[1461]: time="2025-01-30T13:50:23.027522896Z" level=error msg="encountered an error cleaning up failed sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:23.027678 containerd[1461]: time="2025-01-30T13:50:23.027602796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zc8sb,Uid:8fb98178-8803-46f6-a0be-7adf365c426b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:23.027960 kubelet[2562]: E0130 13:50:23.027887 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:23.028071 kubelet[2562]: E0130 13:50:23.027965 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zc8sb" Jan 30 13:50:23.028071 kubelet[2562]: E0130 13:50:23.028001 2562 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zc8sb" Jan 30 13:50:23.028225 kubelet[2562]: E0130 13:50:23.028080 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-zc8sb_calico-system(8fb98178-8803-46f6-a0be-7adf365c426b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zc8sb_calico-system(8fb98178-8803-46f6-a0be-7adf365c426b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zc8sb" podUID="8fb98178-8803-46f6-a0be-7adf365c426b" Jan 30 13:50:23.502695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007-shm.mount: Deactivated successfully. Jan 30 13:50:23.502844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689-shm.mount: Deactivated successfully. Jan 30 13:50:23.502957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31-shm.mount: Deactivated successfully. Jan 30 13:50:23.503057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f-shm.mount: Deactivated successfully. Jan 30 13:50:23.914155 kubelet[2562]: I0130 13:50:23.911980 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:23.914950 containerd[1461]: time="2025-01-30T13:50:23.914912942Z" level=info msg="StopPodSandbox for \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\"" Jan 30 13:50:23.915893 containerd[1461]: time="2025-01-30T13:50:23.915852722Z" level=info msg="Ensure that sandbox 421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2 in task-service has been cleanup successfully" Jan 30 13:50:23.920141 kubelet[2562]: I0130 13:50:23.919923 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:23.922501 kubelet[2562]: I0130 13:50:23.922451 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:23.924607 containerd[1461]: time="2025-01-30T13:50:23.922723583Z" level=info msg="StopPodSandbox for \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\"" Jan 30 13:50:23.924607 containerd[1461]: time="2025-01-30T13:50:23.922940013Z" level=info msg="Ensure that sandbox cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689 in task-service has been cleanup successfully" Jan 30 13:50:23.929869 containerd[1461]: time="2025-01-30T13:50:23.929826617Z" level=info msg="StopPodSandbox for \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\"" Jan 30 13:50:23.930170 containerd[1461]: time="2025-01-30T13:50:23.930141426Z" level=info msg="Ensure that sandbox 9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c in task-service has been cleanup successfully" Jan 30 13:50:23.940304 kubelet[2562]: I0130 13:50:23.939869 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:23.943043 containerd[1461]: 
time="2025-01-30T13:50:23.943005119Z" level=info msg="StopPodSandbox for \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\"" Jan 30 13:50:23.946067 containerd[1461]: time="2025-01-30T13:50:23.946032949Z" level=info msg="Ensure that sandbox d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007 in task-service has been cleanup successfully" Jan 30 13:50:23.947944 kubelet[2562]: I0130 13:50:23.947349 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:23.955528 containerd[1461]: time="2025-01-30T13:50:23.955493835Z" level=info msg="StopPodSandbox for \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\"" Jan 30 13:50:23.956102 containerd[1461]: time="2025-01-30T13:50:23.955978108Z" level=info msg="Ensure that sandbox 58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31 in task-service has been cleanup successfully" Jan 30 13:50:24.047785 containerd[1461]: time="2025-01-30T13:50:24.047717633Z" level=error msg="StopPodSandbox for \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\" failed" error="failed to destroy network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:24.048384 kubelet[2562]: E0130 13:50:24.048004 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:24.048384 kubelet[2562]: E0130 13:50:24.048061 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2"} Jan 30 13:50:24.048384 kubelet[2562]: E0130 13:50:24.048133 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58e99075-1376-48e5-b2c7-99d946da1951\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:50:24.048384 kubelet[2562]: E0130 13:50:24.048171 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58e99075-1376-48e5-b2c7-99d946da1951\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d54978bfb-dbkmf" podUID="58e99075-1376-48e5-b2c7-99d946da1951" Jan 30 13:50:24.049643 containerd[1461]: time="2025-01-30T13:50:24.049594077Z" level=error 
msg="StopPodSandbox for \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\" failed" error="failed to destroy network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:24.050132 kubelet[2562]: E0130 13:50:24.049811 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:24.050132 kubelet[2562]: E0130 13:50:24.049879 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689"} Jan 30 13:50:24.050132 kubelet[2562]: E0130 13:50:24.049933 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"baa3f6f5-b333-4a45-a993-1c13d0b07855\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:50:24.050132 kubelet[2562]: E0130 13:50:24.050013 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"baa3f6f5-b333-4a45-a993-1c13d0b07855\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xtgk9" podUID="baa3f6f5-b333-4a45-a993-1c13d0b07855" Jan 30 13:50:24.086396 containerd[1461]: time="2025-01-30T13:50:24.086326684Z" level=error msg="StopPodSandbox for \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\" failed" error="failed to destroy network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:24.087139 kubelet[2562]: E0130 13:50:24.086888 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:24.087139 kubelet[2562]: E0130 13:50:24.086967 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c"} Jan 30 13:50:24.087139 kubelet[2562]: E0130 13:50:24.087021 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fb98178-8803-46f6-a0be-7adf365c426b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:50:24.087139 kubelet[2562]: E0130 13:50:24.087059 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fb98178-8803-46f6-a0be-7adf365c426b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zc8sb" podUID="8fb98178-8803-46f6-a0be-7adf365c426b" Jan 30 13:50:24.097047 containerd[1461]: time="2025-01-30T13:50:24.096986622Z" level=error msg="StopPodSandbox for \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\" failed" error="failed to destroy network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:24.097724 kubelet[2562]: E0130 13:50:24.097494 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:24.097724 kubelet[2562]: E0130 13:50:24.097578 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007"} Jan 30 13:50:24.097724 kubelet[2562]: E0130 13:50:24.097630 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2051b70c-c342-4cb4-8d41-4fbc2d79f291\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:50:24.097724 kubelet[2562]: E0130 13:50:24.097668 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2051b70c-c342-4cb4-8d41-4fbc2d79f291\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d54978bfb-cbdp5" podUID="2051b70c-c342-4cb4-8d41-4fbc2d79f291" Jan 30 13:50:24.108723 containerd[1461]: time="2025-01-30T13:50:24.108617941Z" level=error msg="StopPodSandbox for \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\" failed" error="failed to destroy network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:50:24.109184 kubelet[2562]: E0130 13:50:24.108913 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:24.109184 kubelet[2562]: E0130 13:50:24.108980 2562 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31"} Jan 30 13:50:24.109184 kubelet[2562]: E0130 13:50:24.109029 2562 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a17c611-62b6-4f6c-b060-7ce3741ea277\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:50:24.109184 kubelet[2562]: E0130 13:50:24.109069 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a17c611-62b6-4f6c-b060-7ce3741ea277\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5786599c57-zjrvp" podUID="3a17c611-62b6-4f6c-b060-7ce3741ea277" Jan 30 13:50:29.413720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564895969.mount: Deactivated successfully. 
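The CreatePodSandbox and KillPodSandbox failures above all trace back to the same missing file: /var/lib/calico/nodename, which calico/node writes once it is running with /var/lib/calico/ mounted. Until that file exists, every CNI add or delete on this node fails with the same message. The Go sketch below illustrates the check the error message implies; it is not Calico's actual source, and the only path it uses is the one named in the log.

// Minimal sketch (not Calico's source) of the readiness check implied by the
// repeated "stat /var/lib/calico/nodename: no such file or directory" errors:
// the CNI plugin cannot add or delete pod networks until calico/node has
// written the node name into /var/lib/calico/nodename.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // path named in the log

func readNodename() (string, error) {
	// A missing file reproduces the failure mode in the log: calico/node is
	// not running yet, or /var/lib/calico/ is not mounted into it.
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Println("CNI add/delete would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}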
Jan 30 13:50:29.458400 containerd[1461]: time="2025-01-30T13:50:29.458334138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:29.459591 containerd[1461]: time="2025-01-30T13:50:29.459520508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:50:29.461030 containerd[1461]: time="2025-01-30T13:50:29.460965512Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:29.463939 containerd[1461]: time="2025-01-30T13:50:29.463886355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:29.464910 containerd[1461]: time="2025-01-30T13:50:29.464740227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.56167068s" Jan 30 13:50:29.464910 containerd[1461]: time="2025-01-30T13:50:29.464788467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:50:29.479604 containerd[1461]: time="2025-01-30T13:50:29.479505242Z" level=info msg="CreateContainer within sandbox \"2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:50:29.508211 containerd[1461]: time="2025-01-30T13:50:29.507608924Z" level=info msg="CreateContainer within sandbox \"2157e223729dec6d7ea31f8b68c9f12deec7cdffce95d6d348d7e4805581e34b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c689af58f15102aacb1afac22f886f92049243ed3cd73649af57d0c7b73cb73d\"" Jan 30 13:50:29.510770 containerd[1461]: time="2025-01-30T13:50:29.509392383Z" level=info msg="StartContainer for \"c689af58f15102aacb1afac22f886f92049243ed3cd73649af57d0c7b73cb73d\"" Jan 30 13:50:29.555492 systemd[1]: Started cri-containerd-c689af58f15102aacb1afac22f886f92049243ed3cd73649af57d0c7b73cb73d.scope - libcontainer container c689af58f15102aacb1afac22f886f92049243ed3cd73649af57d0c7b73cb73d. Jan 30 13:50:29.603047 containerd[1461]: time="2025-01-30T13:50:29.602998181Z" level=info msg="StartContainer for \"c689af58f15102aacb1afac22f886f92049243ed3cd73649af57d0c7b73cb73d\" returns successfully" Jan 30 13:50:29.702598 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:50:29.702749 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
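The PullImage line above reports the calico/node image size (142741872 bytes) and the pull duration (6.56167068s). A quick back-of-the-envelope from those two figures, purely to put the pull time in context:

// Effective download rate for the image pull reported above; both constants
// are taken directly from the PullImage log line.
package main

import "fmt"

func main() {
	const bytesPulled = 142741872.0 // size reported by containerd
	const seconds = 6.56167068      // "in 6.56167068s"

	fmt.Printf("~%.1f MB/s (~%.1f MiB/s)\n",
		bytesPulled/seconds/1e6,
		bytesPulled/seconds/(1<<20))
	// Prints roughly 21.8 MB/s (20.7 MiB/s).
}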
Jan 30 13:50:34.768919 containerd[1461]: time="2025-01-30T13:50:34.768112871Z" level=info msg="StopPodSandbox for \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\"" Jan 30 13:50:34.770446 containerd[1461]: time="2025-01-30T13:50:34.769054026Z" level=info msg="StopPodSandbox for \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\"" Jan 30 13:50:34.860361 kubelet[2562]: I0130 13:50:34.860253 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-krczz" podStartSLOduration=6.442399714 podStartE2EDuration="22.860227873s" podCreationTimestamp="2025-01-30 13:50:12 +0000 UTC" firstStartedPulling="2025-01-30 13:50:13.048146058 +0000 UTC m=+14.431263418" lastFinishedPulling="2025-01-30 13:50:29.465974208 +0000 UTC m=+30.849091577" observedRunningTime="2025-01-30 13:50:29.998319185 +0000 UTC m=+31.381436568" watchObservedRunningTime="2025-01-30 13:50:34.860227873 +0000 UTC m=+36.243345253" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.863 [INFO][3838] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.863 [INFO][3838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" iface="eth0" netns="/var/run/netns/cni-a9adbe16-0be1-0330-c6e2-395626e2644c" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.865 [INFO][3838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" iface="eth0" netns="/var/run/netns/cni-a9adbe16-0be1-0330-c6e2-395626e2644c" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.866 [INFO][3838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" iface="eth0" netns="/var/run/netns/cni-a9adbe16-0be1-0330-c6e2-395626e2644c" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.866 [INFO][3838] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.867 [INFO][3838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.909 [INFO][3849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.909 [INFO][3849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.909 [INFO][3849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.919 [WARNING][3849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.919 [INFO][3849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.921 [INFO][3849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:34.925696 containerd[1461]: 2025-01-30 13:50:34.924 [INFO][3838] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:34.926444 containerd[1461]: time="2025-01-30T13:50:34.926387323Z" level=info msg="TearDown network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\" successfully" Jan 30 13:50:34.926503 containerd[1461]: time="2025-01-30T13:50:34.926446679Z" level=info msg="StopPodSandbox for \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\" returns successfully" Jan 30 13:50:34.929336 containerd[1461]: time="2025-01-30T13:50:34.929092765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-cbdp5,Uid:2051b70c-c342-4cb4-8d41-4fbc2d79f291,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:50:34.933318 systemd[1]: run-netns-cni\x2da9adbe16\x2d0be1\x2d0330\x2dc6e2\x2d395626e2644c.mount: Deactivated successfully. Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.865 [INFO][3837] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.866 [INFO][3837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" iface="eth0" netns="/var/run/netns/cni-c89d2b06-33c9-c47d-6b4a-f0d66938a8a3" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.868 [INFO][3837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" iface="eth0" netns="/var/run/netns/cni-c89d2b06-33c9-c47d-6b4a-f0d66938a8a3" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.870 [INFO][3837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" iface="eth0" netns="/var/run/netns/cni-c89d2b06-33c9-c47d-6b4a-f0d66938a8a3" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.870 [INFO][3837] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.870 [INFO][3837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.909 [INFO][3850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.910 [INFO][3850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.921 [INFO][3850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.939 [WARNING][3850] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.939 [INFO][3850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.941 [INFO][3850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:34.947546 containerd[1461]: 2025-01-30 13:50:34.943 [INFO][3837] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:34.950791 containerd[1461]: time="2025-01-30T13:50:34.947547371Z" level=info msg="TearDown network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\" successfully" Jan 30 13:50:34.950791 containerd[1461]: time="2025-01-30T13:50:34.947578636Z" level=info msg="StopPodSandbox for \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\" returns successfully" Jan 30 13:50:34.950791 containerd[1461]: time="2025-01-30T13:50:34.949699443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5786599c57-zjrvp,Uid:3a17c611-62b6-4f6c-b060-7ce3741ea277,Namespace:calico-system,Attempt:1,}" Jan 30 13:50:34.963231 systemd[1]: run-netns-cni\x2dc89d2b06\x2d33c9\x2dc47d\x2d6b4a\x2df0d66938a8a3.mount: Deactivated successfully. 
Jan 30 13:50:35.175335 systemd-networkd[1375]: cali020b57b80a6: Link UP Jan 30 13:50:35.176735 systemd-networkd[1375]: cali020b57b80a6: Gained carrier Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.019 [INFO][3861] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.046 [INFO][3861] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0 calico-apiserver-5d54978bfb- calico-apiserver 2051b70c-c342-4cb4-8d41-4fbc2d79f291 720 0 2025-01-30 13:50:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d54978bfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal calico-apiserver-5d54978bfb-cbdp5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali020b57b80a6 [] []}} ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.046 [INFO][3861] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.109 [INFO][3884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" HandleID="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.125 [INFO][3884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" HandleID="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003359b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", "pod":"calico-apiserver-5d54978bfb-cbdp5", "timestamp":"2025-01-30 13:50:35.10980973 +0000 UTC"}, Hostname:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.126 [INFO][3884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.126 [INFO][3884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.126 [INFO][3884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal' Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.129 [INFO][3884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.136 [INFO][3884] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.141 [INFO][3884] ipam/ipam.go 489: Trying affinity for 192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.143 [INFO][3884] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.146 [INFO][3884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.146 [INFO][3884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.147 [INFO][3884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4 Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.154 [INFO][3884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.161 [INFO][3884] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.193/26] block=192.168.87.192/26 handle="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.161 [INFO][3884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.193/26] handle="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.161 [INFO][3884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
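In the IPAM exchange above, the plugin confirms an affinity for block 192.168.87.192/26 and claims 192.168.87.193 from it for calico-apiserver-5d54978bfb-cbdp5. A small self-contained check (illustrative only, not Calico code) that the claimed address sits inside that /26 and that the block spans 64 addresses:

// Illustrative check of the IPAM result logged above: block 192.168.87.192/26
// was loaded and 192.168.87.193 was claimed from it.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.87.192/26")
	claimed := netip.MustParseAddr("192.168.87.193")

	fmt.Println("claimed address in block:", block.Contains(claimed)) // true
	fmt.Println("addresses in a /26 block:", 1<<(32-block.Bits()))    // 64
}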
Jan 30 13:50:35.199623 containerd[1461]: 2025-01-30 13:50:35.161 [INFO][3884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.193/26] IPv6=[] ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" HandleID="k8s-pod-network.dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:35.201160 containerd[1461]: 2025-01-30 13:50:35.163 [INFO][3861] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2051b70c-c342-4cb4-8d41-4fbc2d79f291", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-5d54978bfb-cbdp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali020b57b80a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:35.201160 containerd[1461]: 2025-01-30 13:50:35.164 [INFO][3861] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.193/32] ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:35.201160 containerd[1461]: 2025-01-30 13:50:35.164 [INFO][3861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali020b57b80a6 ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:35.201160 containerd[1461]: 2025-01-30 13:50:35.174 [INFO][3861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" 
Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:35.201160 containerd[1461]: 2025-01-30 13:50:35.175 [INFO][3861] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2051b70c-c342-4cb4-8d41-4fbc2d79f291", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4", Pod:"calico-apiserver-5d54978bfb-cbdp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali020b57b80a6", MAC:"1a:6e:d8:00:cf:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:35.201160 containerd[1461]: 2025-01-30 13:50:35.196 [INFO][3861] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-cbdp5" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:35.230811 containerd[1461]: time="2025-01-30T13:50:35.230461930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:35.230811 containerd[1461]: time="2025-01-30T13:50:35.230596149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:35.230811 containerd[1461]: time="2025-01-30T13:50:35.230627333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:35.231562 containerd[1461]: time="2025-01-30T13:50:35.230835478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:35.284511 systemd[1]: Started cri-containerd-dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4.scope - libcontainer container dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4. Jan 30 13:50:35.304108 systemd-networkd[1375]: cali7dc03b293f3: Link UP Jan 30 13:50:35.305151 systemd-networkd[1375]: cali7dc03b293f3: Gained carrier Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.034 [INFO][3870] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.050 [INFO][3870] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0 calico-kube-controllers-5786599c57- calico-system 3a17c611-62b6-4f6c-b060-7ce3741ea277 721 0 2025-01-30 13:50:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5786599c57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal calico-kube-controllers-5786599c57-zjrvp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7dc03b293f3 [] []}} ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.050 [INFO][3870] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.120 [INFO][3888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" HandleID="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.133 [INFO][3888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" HandleID="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5786599c57-zjrvp", "timestamp":"2025-01-30 13:50:35.120004635 +0000 UTC"}, Hostname:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.134 [INFO][3888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.161 [INFO][3888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.161 [INFO][3888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal' Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.232 [INFO][3888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.252 [INFO][3888] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.264 [INFO][3888] ipam/ipam.go 489: Trying affinity for 192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.267 [INFO][3888] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.270 [INFO][3888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.270 [INFO][3888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.272 [INFO][3888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64 Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.282 [INFO][3888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.295 [INFO][3888] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.194/26] block=192.168.87.192/26 handle="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.295 [INFO][3888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.194/26] handle="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.295 [INFO][3888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:50:35.351750 containerd[1461]: 2025-01-30 13:50:35.296 [INFO][3888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.194/26] IPv6=[] ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" HandleID="k8s-pod-network.f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:35.354023 containerd[1461]: 2025-01-30 13:50:35.298 [INFO][3870] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0", GenerateName:"calico-kube-controllers-5786599c57-", Namespace:"calico-system", SelfLink:"", UID:"3a17c611-62b6-4f6c-b060-7ce3741ea277", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5786599c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5786599c57-zjrvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7dc03b293f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:35.354023 containerd[1461]: 2025-01-30 13:50:35.298 [INFO][3870] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.194/32] ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:35.354023 containerd[1461]: 2025-01-30 13:50:35.298 [INFO][3870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dc03b293f3 ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:35.354023 containerd[1461]: 2025-01-30 13:50:35.301 [INFO][3870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:35.354023 containerd[1461]: 2025-01-30 13:50:35.301 [INFO][3870] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0", GenerateName:"calico-kube-controllers-5786599c57-", Namespace:"calico-system", SelfLink:"", UID:"3a17c611-62b6-4f6c-b060-7ce3741ea277", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5786599c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64", Pod:"calico-kube-controllers-5786599c57-zjrvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7dc03b293f3", MAC:"a2:b2:b1:c3:d5:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:35.354023 containerd[1461]: 2025-01-30 13:50:35.348 [INFO][3870] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64" Namespace="calico-system" Pod="calico-kube-controllers-5786599c57-zjrvp" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:35.417402 containerd[1461]: time="2025-01-30T13:50:35.414428132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:35.417402 containerd[1461]: time="2025-01-30T13:50:35.414955011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:35.417402 containerd[1461]: time="2025-01-30T13:50:35.414976620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:35.417402 containerd[1461]: time="2025-01-30T13:50:35.415113985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:35.458470 systemd[1]: Started cri-containerd-f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64.scope - libcontainer container f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64. Jan 30 13:50:35.557456 containerd[1461]: time="2025-01-30T13:50:35.557369779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-cbdp5,Uid:2051b70c-c342-4cb4-8d41-4fbc2d79f291,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4\"" Jan 30 13:50:35.561999 containerd[1461]: time="2025-01-30T13:50:35.561678371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:50:35.574990 containerd[1461]: time="2025-01-30T13:50:35.574937403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5786599c57-zjrvp,Uid:3a17c611-62b6-4f6c-b060-7ce3741ea277,Namespace:calico-system,Attempt:1,} returns sandbox id \"f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64\"" Jan 30 13:50:35.767605 containerd[1461]: time="2025-01-30T13:50:35.767152611Z" level=info msg="StopPodSandbox for \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\"" Jan 30 13:50:35.771064 containerd[1461]: time="2025-01-30T13:50:35.770988307Z" level=info msg="StopPodSandbox for \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\"" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.862 [INFO][4049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.862 [INFO][4049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" iface="eth0" netns="/var/run/netns/cni-41cb03c1-f944-78c9-a3c2-06dcf871f616" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.863 [INFO][4049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" iface="eth0" netns="/var/run/netns/cni-41cb03c1-f944-78c9-a3c2-06dcf871f616" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.863 [INFO][4049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" iface="eth0" netns="/var/run/netns/cni-41cb03c1-f944-78c9-a3c2-06dcf871f616" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.863 [INFO][4049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.864 [INFO][4049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.904 [INFO][4063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.905 [INFO][4063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.905 [INFO][4063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.914 [WARNING][4063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.914 [INFO][4063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.916 [INFO][4063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:35.921320 containerd[1461]: 2025-01-30 13:50:35.919 [INFO][4049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:35.923611 containerd[1461]: time="2025-01-30T13:50:35.921241148Z" level=info msg="TearDown network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\" successfully" Jan 30 13:50:35.923611 containerd[1461]: time="2025-01-30T13:50:35.921440054Z" level=info msg="StopPodSandbox for \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\" returns successfully" Jan 30 13:50:35.924820 containerd[1461]: time="2025-01-30T13:50:35.924574793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zc8sb,Uid:8fb98178-8803-46f6-a0be-7adf365c426b,Namespace:calico-system,Attempt:1,}" Jan 30 13:50:35.936930 systemd[1]: run-netns-cni\x2d41cb03c1\x2df944\x2d78c9\x2da3c2\x2d06dcf871f616.mount: Deactivated successfully. 
Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.861 [INFO][4050] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.862 [INFO][4050] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" iface="eth0" netns="/var/run/netns/cni-412743ca-0104-d03b-a476-2bbc3e22ef2b" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.865 [INFO][4050] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" iface="eth0" netns="/var/run/netns/cni-412743ca-0104-d03b-a476-2bbc3e22ef2b" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.866 [INFO][4050] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" iface="eth0" netns="/var/run/netns/cni-412743ca-0104-d03b-a476-2bbc3e22ef2b" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.866 [INFO][4050] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.867 [INFO][4050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.908 [INFO][4064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.908 [INFO][4064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.916 [INFO][4064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.941 [WARNING][4064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.942 [INFO][4064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.946 [INFO][4064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:35.949994 containerd[1461]: 2025-01-30 13:50:35.948 [INFO][4050] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:35.949994 containerd[1461]: time="2025-01-30T13:50:35.949924118Z" level=info msg="TearDown network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\" successfully" Jan 30 13:50:35.949994 containerd[1461]: time="2025-01-30T13:50:35.949956739Z" level=info msg="StopPodSandbox for \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\" returns successfully" Jan 30 13:50:35.954580 containerd[1461]: time="2025-01-30T13:50:35.954529709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtgk9,Uid:baa3f6f5-b333-4a45-a993-1c13d0b07855,Namespace:kube-system,Attempt:1,}" Jan 30 13:50:35.956447 systemd[1]: run-netns-cni\x2d412743ca\x2d0104\x2dd03b\x2da476\x2d2bbc3e22ef2b.mount: Deactivated successfully. Jan 30 13:50:36.172635 systemd-networkd[1375]: cali1c2b6de50bd: Link UP Jan 30 13:50:36.173558 systemd-networkd[1375]: cali1c2b6de50bd: Gained carrier Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.022 [INFO][4077] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.046 [INFO][4077] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0 csi-node-driver- calico-system 8fb98178-8803-46f6-a0be-7adf365c426b 734 0 2025-01-30 13:50:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal csi-node-driver-zc8sb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1c2b6de50bd [] []}} ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.046 [INFO][4077] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.103 [INFO][4100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" HandleID="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.124 [INFO][4100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" HandleID="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000290ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", "pod":"csi-node-driver-zc8sb", "timestamp":"2025-01-30 13:50:36.103961902 +0000 UTC"}, Hostname:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.125 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.125 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.125 [INFO][4100] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal' Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.129 [INFO][4100] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.135 [INFO][4100] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.143 [INFO][4100] ipam/ipam.go 489: Trying affinity for 192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.145 [INFO][4100] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.148 [INFO][4100] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.148 [INFO][4100] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.150 [INFO][4100] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0 Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.156 [INFO][4100] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.166 [INFO][4100] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.195/26] block=192.168.87.192/26 handle="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.166 [INFO][4100] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.195/26] handle="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" 
host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.166 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:36.197392 containerd[1461]: 2025-01-30 13:50:36.166 [INFO][4100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.195/26] IPv6=[] ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" HandleID="k8s-pod-network.c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:36.198695 containerd[1461]: 2025-01-30 13:50:36.168 [INFO][4077] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fb98178-8803-46f6-a0be-7adf365c426b", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-zc8sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c2b6de50bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:36.198695 containerd[1461]: 2025-01-30 13:50:36.169 [INFO][4077] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.195/32] ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:36.198695 containerd[1461]: 2025-01-30 13:50:36.169 [INFO][4077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c2b6de50bd ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:36.198695 containerd[1461]: 2025-01-30 13:50:36.173 [INFO][4077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:36.198695 containerd[1461]: 2025-01-30 13:50:36.174 [INFO][4077] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fb98178-8803-46f6-a0be-7adf365c426b", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0", Pod:"csi-node-driver-zc8sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c2b6de50bd", MAC:"ce:39:27:89:f0:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:36.198695 containerd[1461]: 2025-01-30 13:50:36.193 [INFO][4077] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0" Namespace="calico-system" Pod="csi-node-driver-zc8sb" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:36.224768 containerd[1461]: time="2025-01-30T13:50:36.224630774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:36.224768 containerd[1461]: time="2025-01-30T13:50:36.224703643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:36.224982 containerd[1461]: time="2025-01-30T13:50:36.224728556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:36.225300 containerd[1461]: time="2025-01-30T13:50:36.225166509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:36.278521 systemd[1]: Started cri-containerd-c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0.scope - libcontainer container c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0. Jan 30 13:50:36.307376 systemd-networkd[1375]: cali84a8359e760: Link UP Jan 30 13:50:36.307735 systemd-networkd[1375]: cali84a8359e760: Gained carrier Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.040 [INFO][4085] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.060 [INFO][4085] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0 coredns-668d6bf9bc- kube-system baa3f6f5-b333-4a45-a993-1c13d0b07855 733 0 2025-01-30 13:50:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal coredns-668d6bf9bc-xtgk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84a8359e760 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.060 [INFO][4085] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.128 [INFO][4104] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" HandleID="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.145 [INFO][4104] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" HandleID="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050670), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-xtgk9", "timestamp":"2025-01-30 13:50:36.128920759 +0000 UTC"}, Hostname:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.146 [INFO][4104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.166 [INFO][4104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.166 [INFO][4104] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal' Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.238 [INFO][4104] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.251 [INFO][4104] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.258 [INFO][4104] ipam/ipam.go 489: Trying affinity for 192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.263 [INFO][4104] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.268 [INFO][4104] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.268 [INFO][4104] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.271 [INFO][4104] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2 Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.282 [INFO][4104] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.299 [INFO][4104] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.196/26] block=192.168.87.192/26 handle="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.299 [INFO][4104] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.196/26] handle="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.299 [INFO][4104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:50:36.333446 containerd[1461]: 2025-01-30 13:50:36.299 [INFO][4104] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.196/26] IPv6=[] ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" HandleID="k8s-pod-network.911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:36.335220 containerd[1461]: 2025-01-30 13:50:36.301 [INFO][4085] cni-plugin/k8s.go 386: Populated endpoint ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"baa3f6f5-b333-4a45-a993-1c13d0b07855", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-xtgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8359e760", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:36.335220 containerd[1461]: 2025-01-30 13:50:36.302 [INFO][4085] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.196/32] ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:36.335220 containerd[1461]: 2025-01-30 13:50:36.302 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84a8359e760 ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:36.335220 containerd[1461]: 2025-01-30 13:50:36.307 [INFO][4085] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:36.335220 containerd[1461]: 2025-01-30 13:50:36.308 [INFO][4085] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"baa3f6f5-b333-4a45-a993-1c13d0b07855", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2", Pod:"coredns-668d6bf9bc-xtgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8359e760", MAC:"ee:1a:ca:5b:da:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:36.335220 containerd[1461]: 2025-01-30 13:50:36.331 [INFO][4085] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-xtgk9" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:36.353320 containerd[1461]: time="2025-01-30T13:50:36.353157918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zc8sb,Uid:8fb98178-8803-46f6-a0be-7adf365c426b,Namespace:calico-system,Attempt:1,} returns sandbox id \"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0\"" Jan 30 13:50:36.372409 containerd[1461]: time="2025-01-30T13:50:36.371940012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:36.372409 containerd[1461]: time="2025-01-30T13:50:36.372007149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:36.372409 containerd[1461]: time="2025-01-30T13:50:36.372055665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:36.372409 containerd[1461]: time="2025-01-30T13:50:36.372208630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:36.397555 systemd[1]: Started cri-containerd-911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2.scope - libcontainer container 911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2. Jan 30 13:50:36.470936 containerd[1461]: time="2025-01-30T13:50:36.470734830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtgk9,Uid:baa3f6f5-b333-4a45-a993-1c13d0b07855,Namespace:kube-system,Attempt:1,} returns sandbox id \"911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2\"" Jan 30 13:50:36.485180 containerd[1461]: time="2025-01-30T13:50:36.485111150Z" level=info msg="CreateContainer within sandbox \"911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:50:36.519421 containerd[1461]: time="2025-01-30T13:50:36.519367415Z" level=info msg="CreateContainer within sandbox \"911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da994266650e18ad38d311f70406f50ced9e05f16597bcf804ca22d261930667\"" Jan 30 13:50:36.522359 containerd[1461]: time="2025-01-30T13:50:36.521556972Z" level=info msg="StartContainer for \"da994266650e18ad38d311f70406f50ced9e05f16597bcf804ca22d261930667\"" Jan 30 13:50:36.571497 systemd[1]: Started cri-containerd-da994266650e18ad38d311f70406f50ced9e05f16597bcf804ca22d261930667.scope - libcontainer container da994266650e18ad38d311f70406f50ced9e05f16597bcf804ca22d261930667. Jan 30 13:50:36.633754 containerd[1461]: time="2025-01-30T13:50:36.633704332Z" level=info msg="StartContainer for \"da994266650e18ad38d311f70406f50ced9e05f16597bcf804ca22d261930667\" returns successfully" Jan 30 13:50:36.783751 systemd-networkd[1375]: cali020b57b80a6: Gained IPv6LL Jan 30 13:50:36.792610 containerd[1461]: time="2025-01-30T13:50:36.792570479Z" level=info msg="StopPodSandbox for \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\"" Jan 30 13:50:36.977395 systemd-networkd[1375]: cali7dc03b293f3: Gained IPv6LL Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:36.980 [INFO][4284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:36.980 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" iface="eth0" netns="/var/run/netns/cni-461539d2-1a1f-6b5d-786e-5f8ca39c5818" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:36.982 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" iface="eth0" netns="/var/run/netns/cni-461539d2-1a1f-6b5d-786e-5f8ca39c5818" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:36.984 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" iface="eth0" netns="/var/run/netns/cni-461539d2-1a1f-6b5d-786e-5f8ca39c5818" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:36.984 [INFO][4284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:36.984 [INFO][4284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:37.033 [INFO][4294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:37.034 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:37.034 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:37.048 [WARNING][4294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:37.048 [INFO][4294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:37.063 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:37.072343 containerd[1461]: 2025-01-30 13:50:37.065 [INFO][4284] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:37.076219 containerd[1461]: time="2025-01-30T13:50:37.075148889Z" level=info msg="TearDown network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\" successfully" Jan 30 13:50:37.076219 containerd[1461]: time="2025-01-30T13:50:37.075194299Z" level=info msg="StopPodSandbox for \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\" returns successfully" Jan 30 13:50:37.084763 containerd[1461]: time="2025-01-30T13:50:37.084055263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-dbkmf,Uid:58e99075-1376-48e5-b2c7-99d946da1951,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:50:37.085053 systemd[1]: run-netns-cni\x2d461539d2\x2d1a1f\x2d6b5d\x2d786e\x2d5f8ca39c5818.mount: Deactivated successfully. Jan 30 13:50:37.097375 kubelet[2562]: I0130 13:50:37.097125 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xtgk9" podStartSLOduration=31.097097189 podStartE2EDuration="31.097097189s" podCreationTimestamp="2025-01-30 13:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:50:37.058545244 +0000 UTC m=+38.441662625" watchObservedRunningTime="2025-01-30 13:50:37.097097189 +0000 UTC m=+38.480214565" Jan 30 13:50:37.368021 systemd-networkd[1375]: cali4d9b535ea12: Link UP Jan 30 13:50:37.373555 systemd-networkd[1375]: cali4d9b535ea12: Gained carrier Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.178 [INFO][4307] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.198 [INFO][4307] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0 calico-apiserver-5d54978bfb- calico-apiserver 58e99075-1376-48e5-b2c7-99d946da1951 748 0 2025-01-30 13:50:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d54978bfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal calico-apiserver-5d54978bfb-dbkmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4d9b535ea12 [] []}} ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.198 [INFO][4307] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.292 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" 
HandleID="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.309 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" HandleID="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040eb90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", "pod":"calico-apiserver-5d54978bfb-dbkmf", "timestamp":"2025-01-30 13:50:37.292668916 +0000 UTC"}, Hostname:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.309 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.309 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.309 [INFO][4321] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal' Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.312 [INFO][4321] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.319 [INFO][4321] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.327 [INFO][4321] ipam/ipam.go 489: Trying affinity for 192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.330 [INFO][4321] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.335 [INFO][4321] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.335 [INFO][4321] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.337 [INFO][4321] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.345 [INFO][4321] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 
handle="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.355 [INFO][4321] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.197/26] block=192.168.87.192/26 handle="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.356 [INFO][4321] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.197/26] handle="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.356 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:37.407587 containerd[1461]: 2025-01-30 13:50:37.356 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.197/26] IPv6=[] ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" HandleID="k8s-pod-network.a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.409419 containerd[1461]: 2025-01-30 13:50:37.359 [INFO][4307] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"58e99075-1376-48e5-b2c7-99d946da1951", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-5d54978bfb-dbkmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d9b535ea12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:37.409419 containerd[1461]: 2025-01-30 13:50:37.359 [INFO][4307] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.197/32] 
ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.409419 containerd[1461]: 2025-01-30 13:50:37.359 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d9b535ea12 ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.409419 containerd[1461]: 2025-01-30 13:50:37.379 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.409419 containerd[1461]: 2025-01-30 13:50:37.381 [INFO][4307] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"58e99075-1376-48e5-b2c7-99d946da1951", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d", Pod:"calico-apiserver-5d54978bfb-dbkmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d9b535ea12", MAC:"02:4c:ee:d6:b9:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:37.409419 containerd[1461]: 2025-01-30 13:50:37.404 [INFO][4307] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d" Namespace="calico-apiserver" Pod="calico-apiserver-5d54978bfb-dbkmf" 
WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:37.457677 containerd[1461]: time="2025-01-30T13:50:37.457547544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:37.457677 containerd[1461]: time="2025-01-30T13:50:37.457635711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:37.458250 containerd[1461]: time="2025-01-30T13:50:37.457673936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:37.458250 containerd[1461]: time="2025-01-30T13:50:37.457795750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:37.487431 systemd-networkd[1375]: cali1c2b6de50bd: Gained IPv6LL Jan 30 13:50:37.511476 systemd[1]: Started cri-containerd-a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d.scope - libcontainer container a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d. Jan 30 13:50:37.617042 containerd[1461]: time="2025-01-30T13:50:37.616993638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54978bfb-dbkmf,Uid:58e99075-1376-48e5-b2c7-99d946da1951,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d\"" Jan 30 13:50:37.766950 containerd[1461]: time="2025-01-30T13:50:37.766906937Z" level=info msg="StopPodSandbox for \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\"" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.861 [INFO][4396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.862 [INFO][4396] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" iface="eth0" netns="/var/run/netns/cni-b4cc50db-abc6-c73c-b2b7-83b01f76905f" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.862 [INFO][4396] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" iface="eth0" netns="/var/run/netns/cni-b4cc50db-abc6-c73c-b2b7-83b01f76905f" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.863 [INFO][4396] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" iface="eth0" netns="/var/run/netns/cni-b4cc50db-abc6-c73c-b2b7-83b01f76905f" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.863 [INFO][4396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.863 [INFO][4396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.918 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.921 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.921 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.933 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.933 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.936 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:37.942568 containerd[1461]: 2025-01-30 13:50:37.940 [INFO][4396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:37.947666 containerd[1461]: time="2025-01-30T13:50:37.945421359Z" level=info msg="TearDown network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\" successfully" Jan 30 13:50:37.947666 containerd[1461]: time="2025-01-30T13:50:37.945466143Z" level=info msg="StopPodSandbox for \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\" returns successfully" Jan 30 13:50:37.951634 containerd[1461]: time="2025-01-30T13:50:37.948206304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmmjn,Uid:a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33,Namespace:kube-system,Attempt:1,}" Jan 30 13:50:37.950691 systemd[1]: run-netns-cni\x2db4cc50db\x2dabc6\x2dc73c\x2db2b7\x2d83b01f76905f.mount: Deactivated successfully. 
Jan 30 13:50:37.998464 systemd-networkd[1375]: cali84a8359e760: Gained IPv6LL Jan 30 13:50:38.329615 systemd-networkd[1375]: cali47b10689824: Link UP Jan 30 13:50:38.329912 systemd-networkd[1375]: cali47b10689824: Gained carrier Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.063 [INFO][4417] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.092 [INFO][4417] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0 coredns-668d6bf9bc- kube-system a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33 765 0 2025-01-30 13:50:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal coredns-668d6bf9bc-nmmjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali47b10689824 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.093 [INFO][4417] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.225 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" HandleID="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.249 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" HandleID="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319870), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-nmmjn", "timestamp":"2025-01-30 13:50:38.225407591 +0000 UTC"}, Hostname:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.249 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.249 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.249 [INFO][4434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal' Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.255 [INFO][4434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.265 [INFO][4434] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.275 [INFO][4434] ipam/ipam.go 489: Trying affinity for 192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.280 [INFO][4434] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.288 [INFO][4434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.288 [INFO][4434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.292 [INFO][4434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.301 [INFO][4434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.317 [INFO][4434] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.198/26] block=192.168.87.192/26 handle="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.317 [INFO][4434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.198/26] handle="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" host="ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal" Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.317 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
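(Illustrative aside, not part of the captured log.) The IPAM entries above show the node confirming its affinity for block 192.168.87.192/26 and then claiming 192.168.87.198 for the coredns pod. A quick standard-library check that the claimed address really falls inside the affine /26:

    import ipaddress

    # Values copied from the IPAM entries above.
    block = ipaddress.ip_network("192.168.87.192/26")
    assigned = ipaddress.ip_address("192.168.87.198")

    assert assigned in block
    print(f"{assigned} is one of {block.num_addresses} addresses in {block}")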
Jan 30 13:50:38.372084 containerd[1461]: 2025-01-30 13:50:38.317 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.198/26] IPv6=[] ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" HandleID="k8s-pod-network.f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:38.373414 containerd[1461]: 2025-01-30 13:50:38.323 [INFO][4417] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-nmmjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47b10689824", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:38.373414 containerd[1461]: 2025-01-30 13:50:38.323 [INFO][4417] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.198/32] ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:38.373414 containerd[1461]: 2025-01-30 13:50:38.323 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47b10689824 ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:38.373414 containerd[1461]: 2025-01-30 13:50:38.332 [INFO][4417] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:38.373414 containerd[1461]: 2025-01-30 13:50:38.336 [INFO][4417] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb", Pod:"coredns-668d6bf9bc-nmmjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47b10689824", MAC:"7a:4a:1f:ce:96:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:38.373414 containerd[1461]: 2025-01-30 13:50:38.368 [INFO][4417] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb" Namespace="kube-system" Pod="coredns-668d6bf9bc-nmmjn" WorkloadEndpoint="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:38.418822 containerd[1461]: time="2025-01-30T13:50:38.418555043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:38.418822 containerd[1461]: time="2025-01-30T13:50:38.418628086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:38.418822 containerd[1461]: time="2025-01-30T13:50:38.418649449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:38.418822 containerd[1461]: time="2025-01-30T13:50:38.418759656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:38.470672 systemd[1]: Started cri-containerd-f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb.scope - libcontainer container f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb. Jan 30 13:50:38.553798 containerd[1461]: time="2025-01-30T13:50:38.553719779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmmjn,Uid:a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33,Namespace:kube-system,Attempt:1,} returns sandbox id \"f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb\"" Jan 30 13:50:38.561527 containerd[1461]: time="2025-01-30T13:50:38.561310825Z" level=info msg="CreateContainer within sandbox \"f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:50:38.585063 containerd[1461]: time="2025-01-30T13:50:38.584912593Z" level=info msg="CreateContainer within sandbox \"f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d979bd5bf0e2382628cac1cbb7bf908bb647164457a41a21e33e548057d726fe\"" Jan 30 13:50:38.589935 containerd[1461]: time="2025-01-30T13:50:38.588996967Z" level=info msg="StartContainer for \"d979bd5bf0e2382628cac1cbb7bf908bb647164457a41a21e33e548057d726fe\"" Jan 30 13:50:38.651434 systemd[1]: Started cri-containerd-d979bd5bf0e2382628cac1cbb7bf908bb647164457a41a21e33e548057d726fe.scope - libcontainer container d979bd5bf0e2382628cac1cbb7bf908bb647164457a41a21e33e548057d726fe. Jan 30 13:50:38.717435 containerd[1461]: time="2025-01-30T13:50:38.717390299Z" level=info msg="StartContainer for \"d979bd5bf0e2382628cac1cbb7bf908bb647164457a41a21e33e548057d726fe\" returns successfully" Jan 30 13:50:38.948707 systemd[1]: run-containerd-runc-k8s.io-f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb-runc.oCCq37.mount: Deactivated successfully. 
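(Illustrative aside, not part of the captured log.) By this point containerd has reported two sandbox completions in this excerpt, one for calico-apiserver-5d54978bfb-dbkmf and one for coredns-668d6bf9bc-nmmjn, and the later cri-containerd-*.scope and StartContainer entries refer back to those 64-character sandbox IDs. A small sketch of indexing them when reading a log like this, assuming the message format shown above:

    import re

    # Matches containerd's completion message, e.g.
    #   RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmmjn,...,Attempt:1,}
    #   returns sandbox id \"f774f022...\"
    SANDBOX_DONE = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),[^}]*\} '
        r'returns sandbox id \\?"([0-9a-f]{64})\\?"'
    )

    def index_sandboxes(log_text: str) -> dict:
        """Map pod name -> sandbox id from containerd 'returns sandbox id' messages."""
        return dict(SANDBOX_DONE.findall(log_text))

Run over this excerpt it would yield a6aab700... for the apiserver pod and f774f022... for the coredns pod.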
Jan 30 13:50:38.958569 systemd-networkd[1375]: cali4d9b535ea12: Gained IPv6LL Jan 30 13:50:38.966751 containerd[1461]: time="2025-01-30T13:50:38.966688442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:38.968053 containerd[1461]: time="2025-01-30T13:50:38.968005393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:50:38.968952 containerd[1461]: time="2025-01-30T13:50:38.968914105Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:38.973027 containerd[1461]: time="2025-01-30T13:50:38.972988279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:38.974893 containerd[1461]: time="2025-01-30T13:50:38.974379333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.412652447s" Jan 30 13:50:38.974893 containerd[1461]: time="2025-01-30T13:50:38.974745458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:50:38.978631 containerd[1461]: time="2025-01-30T13:50:38.978591769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:50:38.987249 containerd[1461]: time="2025-01-30T13:50:38.986427610Z" level=info msg="CreateContainer within sandbox \"dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:50:39.010110 containerd[1461]: time="2025-01-30T13:50:39.008025066Z" level=info msg="CreateContainer within sandbox \"dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90cea7759f450bb6a9584ff83256f1bb1cdd0ee676997dd206b2cba4298e9ef5\"" Jan 30 13:50:39.012648 containerd[1461]: time="2025-01-30T13:50:39.012602058Z" level=info msg="StartContainer for \"90cea7759f450bb6a9584ff83256f1bb1cdd0ee676997dd206b2cba4298e9ef5\"" Jan 30 13:50:39.120476 systemd[1]: Started cri-containerd-90cea7759f450bb6a9584ff83256f1bb1cdd0ee676997dd206b2cba4298e9ef5.scope - libcontainer container 90cea7759f450bb6a9584ff83256f1bb1cdd0ee676997dd206b2cba4298e9ef5. 
Jan 30 13:50:39.125286 kubelet[2562]: I0130 13:50:39.122644 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nmmjn" podStartSLOduration=33.122617214 podStartE2EDuration="33.122617214s" podCreationTimestamp="2025-01-30 13:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:50:39.084325249 +0000 UTC m=+40.467442630" watchObservedRunningTime="2025-01-30 13:50:39.122617214 +0000 UTC m=+40.505734600" Jan 30 13:50:39.254986 containerd[1461]: time="2025-01-30T13:50:39.254768599Z" level=info msg="StartContainer for \"90cea7759f450bb6a9584ff83256f1bb1cdd0ee676997dd206b2cba4298e9ef5\" returns successfully" Jan 30 13:50:39.332341 kubelet[2562]: I0130 13:50:39.332283 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:39.599434 systemd-networkd[1375]: cali47b10689824: Gained IPv6LL Jan 30 13:50:40.387863 kubelet[2562]: I0130 13:50:40.387073 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:40.418559 kubelet[2562]: I0130 13:50:40.416576 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d54978bfb-cbdp5" podStartSLOduration=24.998706594 podStartE2EDuration="28.416530303s" podCreationTimestamp="2025-01-30 13:50:12 +0000 UTC" firstStartedPulling="2025-01-30 13:50:35.560309939 +0000 UTC m=+36.943427294" lastFinishedPulling="2025-01-30 13:50:38.978133625 +0000 UTC m=+40.361251003" observedRunningTime="2025-01-30 13:50:40.083151719 +0000 UTC m=+41.466269101" watchObservedRunningTime="2025-01-30 13:50:40.416530303 +0000 UTC m=+41.799647684" Jan 30 13:50:41.065303 kubelet[2562]: I0130 13:50:41.065233 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:41.667341 containerd[1461]: time="2025-01-30T13:50:41.664893875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:41.668654 containerd[1461]: time="2025-01-30T13:50:41.667708431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:50:41.671294 containerd[1461]: time="2025-01-30T13:50:41.668674469Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:41.675731 containerd[1461]: time="2025-01-30T13:50:41.675566118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:41.679118 containerd[1461]: time="2025-01-30T13:50:41.678900803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.698240614s" Jan 30 13:50:41.679118 containerd[1461]: time="2025-01-30T13:50:41.678971633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:50:41.681287 containerd[1461]: time="2025-01-30T13:50:41.680994335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:50:41.697037 containerd[1461]: time="2025-01-30T13:50:41.696992575Z" level=info msg="CreateContainer within sandbox \"f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:50:41.732310 containerd[1461]: time="2025-01-30T13:50:41.731832915Z" level=info msg="CreateContainer within sandbox \"f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"66fcfac1654fa28a522378325305e4d101b2b27075bdb3477c242578b7f30880\"" Jan 30 13:50:41.735778 containerd[1461]: time="2025-01-30T13:50:41.735729904Z" level=info msg="StartContainer for \"66fcfac1654fa28a522378325305e4d101b2b27075bdb3477c242578b7f30880\"" Jan 30 13:50:41.748021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062123874.mount: Deactivated successfully. Jan 30 13:50:41.780305 kernel: bpftool[4733]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:50:41.814192 systemd[1]: Started cri-containerd-66fcfac1654fa28a522378325305e4d101b2b27075bdb3477c242578b7f30880.scope - libcontainer container 66fcfac1654fa28a522378325305e4d101b2b27075bdb3477c242578b7f30880. Jan 30 13:50:41.874161 ntpd[1429]: Listen normally on 7 cali020b57b80a6 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 30 13:50:41.875488 ntpd[1429]: 30 Jan 13:50:41 ntpd[1429]: Listen normally on 7 cali020b57b80a6 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 30 13:50:41.875488 ntpd[1429]: 30 Jan 13:50:41 ntpd[1429]: Listen normally on 8 cali7dc03b293f3 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 30 13:50:41.875488 ntpd[1429]: 30 Jan 13:50:41 ntpd[1429]: Listen normally on 9 cali1c2b6de50bd [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:50:41.875488 ntpd[1429]: 30 Jan 13:50:41 ntpd[1429]: Listen normally on 10 cali84a8359e760 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:50:41.875488 ntpd[1429]: 30 Jan 13:50:41 ntpd[1429]: Listen normally on 11 cali4d9b535ea12 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:50:41.875488 ntpd[1429]: 30 Jan 13:50:41 ntpd[1429]: Listen normally on 12 cali47b10689824 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:50:41.874293 ntpd[1429]: Listen normally on 8 cali7dc03b293f3 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 30 13:50:41.874354 ntpd[1429]: Listen normally on 9 cali1c2b6de50bd [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:50:41.874412 ntpd[1429]: Listen normally on 10 cali84a8359e760 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:50:41.874473 ntpd[1429]: Listen normally on 11 cali4d9b535ea12 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:50:41.874528 ntpd[1429]: Listen normally on 12 cali47b10689824 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:50:41.894754 containerd[1461]: time="2025-01-30T13:50:41.894693800Z" level=info msg="StartContainer for \"66fcfac1654fa28a522378325305e4d101b2b27075bdb3477c242578b7f30880\" returns successfully" Jan 30 13:50:42.095725 kubelet[2562]: I0130 13:50:42.095546 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5786599c57-zjrvp" podStartSLOduration=23.994035303 podStartE2EDuration="30.095522939s" podCreationTimestamp="2025-01-30 13:50:12 +0000 UTC" firstStartedPulling="2025-01-30 13:50:35.578590521 +0000 UTC m=+36.961707879" 
lastFinishedPulling="2025-01-30 13:50:41.680078151 +0000 UTC m=+43.063195515" observedRunningTime="2025-01-30 13:50:42.091593712 +0000 UTC m=+43.474711106" watchObservedRunningTime="2025-01-30 13:50:42.095522939 +0000 UTC m=+43.478640319" Jan 30 13:50:42.226337 systemd-networkd[1375]: vxlan.calico: Link UP Jan 30 13:50:42.226354 systemd-networkd[1375]: vxlan.calico: Gained carrier Jan 30 13:50:42.909765 containerd[1461]: time="2025-01-30T13:50:42.909695632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:42.911073 containerd[1461]: time="2025-01-30T13:50:42.910997241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:50:42.912498 containerd[1461]: time="2025-01-30T13:50:42.912431942Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:42.915221 containerd[1461]: time="2025-01-30T13:50:42.915180153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:42.916942 containerd[1461]: time="2025-01-30T13:50:42.916163282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.235125715s" Jan 30 13:50:42.916942 containerd[1461]: time="2025-01-30T13:50:42.916211494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:50:42.919483 containerd[1461]: time="2025-01-30T13:50:42.917962626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:50:42.920391 containerd[1461]: time="2025-01-30T13:50:42.919962043Z" level=info msg="CreateContainer within sandbox \"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:50:42.944748 containerd[1461]: time="2025-01-30T13:50:42.944700803Z" level=info msg="CreateContainer within sandbox \"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fe172cbec5e53ade5a65a23f2c201bec5189f26221518e2cfed3b0439ad1ec92\"" Jan 30 13:50:42.945745 containerd[1461]: time="2025-01-30T13:50:42.945534223Z" level=info msg="StartContainer for \"fe172cbec5e53ade5a65a23f2c201bec5189f26221518e2cfed3b0439ad1ec92\"" Jan 30 13:50:43.002488 systemd[1]: Started cri-containerd-fe172cbec5e53ade5a65a23f2c201bec5189f26221518e2cfed3b0439ad1ec92.scope - libcontainer container fe172cbec5e53ade5a65a23f2c201bec5189f26221518e2cfed3b0439ad1ec92. 
Jan 30 13:50:43.077444 containerd[1461]: time="2025-01-30T13:50:43.076811253Z" level=info msg="StartContainer for \"fe172cbec5e53ade5a65a23f2c201bec5189f26221518e2cfed3b0439ad1ec92\" returns successfully" Jan 30 13:50:43.084869 kubelet[2562]: I0130 13:50:43.084699 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:43.142897 containerd[1461]: time="2025-01-30T13:50:43.141073945Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:43.146705 containerd[1461]: time="2025-01-30T13:50:43.146136678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:50:43.158727 containerd[1461]: time="2025-01-30T13:50:43.158663866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 240.657652ms" Jan 30 13:50:43.159024 containerd[1461]: time="2025-01-30T13:50:43.158991472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:50:43.161800 containerd[1461]: time="2025-01-30T13:50:43.161068680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:50:43.163443 containerd[1461]: time="2025-01-30T13:50:43.163407273Z" level=info msg="CreateContainer within sandbox \"a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:50:43.186474 containerd[1461]: time="2025-01-30T13:50:43.186413854Z" level=info msg="CreateContainer within sandbox \"a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"08d7e416ef3aa054d032396a31d4c8c68f3dffac4cb265a258525fac1d169805\"" Jan 30 13:50:43.187681 containerd[1461]: time="2025-01-30T13:50:43.187643789Z" level=info msg="StartContainer for \"08d7e416ef3aa054d032396a31d4c8c68f3dffac4cb265a258525fac1d169805\"" Jan 30 13:50:43.237481 systemd[1]: Started cri-containerd-08d7e416ef3aa054d032396a31d4c8c68f3dffac4cb265a258525fac1d169805.scope - libcontainer container 08d7e416ef3aa054d032396a31d4c8c68f3dffac4cb265a258525fac1d169805. 
Jan 30 13:50:43.340150 containerd[1461]: time="2025-01-30T13:50:43.339946083Z" level=info msg="StartContainer for \"08d7e416ef3aa054d032396a31d4c8c68f3dffac4cb265a258525fac1d169805\" returns successfully" Jan 30 13:50:43.797224 kubelet[2562]: I0130 13:50:43.797153 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:43.950575 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Jan 30 13:50:44.128708 kubelet[2562]: I0130 13:50:44.128222 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d54978bfb-dbkmf" podStartSLOduration=26.587135477 podStartE2EDuration="32.126255388s" podCreationTimestamp="2025-01-30 13:50:12 +0000 UTC" firstStartedPulling="2025-01-30 13:50:37.621385517 +0000 UTC m=+39.004502876" lastFinishedPulling="2025-01-30 13:50:43.160505423 +0000 UTC m=+44.543622787" observedRunningTime="2025-01-30 13:50:44.121375072 +0000 UTC m=+45.504492465" watchObservedRunningTime="2025-01-30 13:50:44.126255388 +0000 UTC m=+45.509372990" Jan 30 13:50:44.636443 containerd[1461]: time="2025-01-30T13:50:44.636380166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:44.638561 containerd[1461]: time="2025-01-30T13:50:44.638502075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:50:44.638853 containerd[1461]: time="2025-01-30T13:50:44.638821058Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:44.644942 containerd[1461]: time="2025-01-30T13:50:44.644697395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:44.647297 containerd[1461]: time="2025-01-30T13:50:44.646879915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.485509482s" Jan 30 13:50:44.647297 containerd[1461]: time="2025-01-30T13:50:44.646933782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:50:44.655158 containerd[1461]: time="2025-01-30T13:50:44.654947728Z" level=info msg="CreateContainer within sandbox \"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:50:44.685313 containerd[1461]: time="2025-01-30T13:50:44.683368561Z" level=info msg="CreateContainer within sandbox \"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2a619e22b5d7c24bff1dc18ea2b56a042ea2941dc98667b107e57fb7e7edb219\"" Jan 30 13:50:44.687788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123225867.mount: Deactivated successfully. 
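(Illustrative aside, not part of the captured log.) In the kubelet entry above for calico-apiserver-5d54978bfb-dbkmf, the gap between podStartE2EDuration (about 32.126s) and podStartSLOduration (about 26.587s) matches the image-pull window, lastFinishedPulling minus firstStartedPulling, to the microsecond. A short check with the timestamps copied from that entry:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    # datetime's %f accepts at most 6 fractional digits, so the journal's 9 are trimmed.
    first_pull = datetime.strptime("2025-01-30 13:50:37.621385", fmt)
    last_pull  = datetime.strptime("2025-01-30 13:50:43.160505", fmt)

    pull_window = (last_pull - first_pull).total_seconds()
    e2e, slo = 32.126255388, 26.587135477

    print(f"pull window: {pull_window:.6f}s")   # ~5.539120s
    print(f"e2e - slo  : {e2e - slo:.6f}s")     # ~5.539120s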
Jan 30 13:50:44.690716 containerd[1461]: time="2025-01-30T13:50:44.690677393Z" level=info msg="StartContainer for \"2a619e22b5d7c24bff1dc18ea2b56a042ea2941dc98667b107e57fb7e7edb219\"" Jan 30 13:50:44.763509 systemd[1]: Started cri-containerd-2a619e22b5d7c24bff1dc18ea2b56a042ea2941dc98667b107e57fb7e7edb219.scope - libcontainer container 2a619e22b5d7c24bff1dc18ea2b56a042ea2941dc98667b107e57fb7e7edb219. Jan 30 13:50:44.818305 containerd[1461]: time="2025-01-30T13:50:44.818208045Z" level=info msg="StartContainer for \"2a619e22b5d7c24bff1dc18ea2b56a042ea2941dc98667b107e57fb7e7edb219\" returns successfully" Jan 30 13:50:44.874372 kubelet[2562]: I0130 13:50:44.874135 2562 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:50:44.874372 kubelet[2562]: I0130 13:50:44.874185 2562 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:50:45.126372 kubelet[2562]: I0130 13:50:45.125465 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zc8sb" podStartSLOduration=24.833200739 podStartE2EDuration="33.125431079s" podCreationTimestamp="2025-01-30 13:50:12 +0000 UTC" firstStartedPulling="2025-01-30 13:50:36.356243312 +0000 UTC m=+37.739360685" lastFinishedPulling="2025-01-30 13:50:44.648473648 +0000 UTC m=+46.031591025" observedRunningTime="2025-01-30 13:50:45.124584075 +0000 UTC m=+46.507701455" watchObservedRunningTime="2025-01-30 13:50:45.125431079 +0000 UTC m=+46.508548460" Jan 30 13:50:46.874073 ntpd[1429]: Listen normally on 13 vxlan.calico 192.168.87.192:123 Jan 30 13:50:46.874252 ntpd[1429]: Listen normally on 14 vxlan.calico [fe80::64bd:e9ff:fea8:bc87%10]:123 Jan 30 13:50:46.874618 ntpd[1429]: 30 Jan 13:50:46 ntpd[1429]: Listen normally on 13 vxlan.calico 192.168.87.192:123 Jan 30 13:50:46.874618 ntpd[1429]: 30 Jan 13:50:46 ntpd[1429]: Listen normally on 14 vxlan.calico [fe80::64bd:e9ff:fea8:bc87%10]:123 Jan 30 13:50:51.008045 systemd[1]: Started sshd@9-10.128.0.25:22-139.178.68.195:57726.service - OpenSSH per-connection server daemon (139.178.68.195:57726). Jan 30 13:50:51.372780 sshd[5007]: Accepted publickey for core from 139.178.68.195 port 57726 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:51.373967 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:51.392367 systemd-logind[1448]: New session 10 of user core. Jan 30 13:50:51.395801 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:50:51.770710 sshd[5007]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:51.777398 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:50:51.778148 systemd[1]: sshd@9-10.128.0.25:22-139.178.68.195:57726.service: Deactivated successfully. Jan 30 13:50:51.781578 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:50:51.783056 systemd-logind[1448]: Removed session 10. Jan 30 13:50:56.838544 systemd[1]: Started sshd@10-10.128.0.25:22-139.178.68.195:36534.service - OpenSSH per-connection server daemon (139.178.68.195:36534). 
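(Illustrative aside, not part of the captured log.) The sshd/logind entries above record session 10 being opened and then closed again less than half a second later; pairing the pam_unix "session opened"/"session closed" messages by sshd PID is enough to measure such sessions. A minimal sketch, assuming the syslog-style timestamp and message format shown above and the year 2025 taken from the containerd timestamps:

    import re
    from datetime import datetime

    STAMP = "%Y %b %d %H:%M:%S.%f"  # the log's own timestamps carry no year
    EVENT = re.compile(
        r'(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) sshd\[(\d+)\]: '
        r'pam_unix\(sshd:session\): session (opened|closed)'
    )

    def session_lengths(log_text: str) -> dict:
        """Map sshd PID -> session length in seconds, from pam_unix open/close pairs."""
        opened, lengths = {}, {}
        for stamp, pid, kind in EVENT.findall(log_text):
            t = datetime.strptime("2025 " + stamp, STAMP)
            if kind == "opened":
                opened[pid] = t
            elif pid in opened:
                lengths[pid] = (t - opened.pop(pid)).total_seconds()
        return lengths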
Jan 30 13:50:57.188118 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 36534 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:50:57.190507 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:57.197705 systemd-logind[1448]: New session 11 of user core. Jan 30 13:50:57.202515 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:50:57.516997 sshd[5051]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:57.522195 systemd[1]: sshd@10-10.128.0.25:22-139.178.68.195:36534.service: Deactivated successfully. Jan 30 13:50:57.525949 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:50:57.530450 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:50:57.532826 systemd-logind[1448]: Removed session 11. Jan 30 13:50:58.780327 containerd[1461]: time="2025-01-30T13:50:58.778814299Z" level=info msg="StopPodSandbox for \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\"" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.843 [WARNING][5077] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"baa3f6f5-b333-4a45-a993-1c13d0b07855", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2", Pod:"coredns-668d6bf9bc-xtgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8359e760", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.843 [INFO][5077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.843 [INFO][5077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" iface="eth0" netns="" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.843 [INFO][5077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.843 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.883 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.884 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.884 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.892 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.892 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.894 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:58.896353 containerd[1461]: 2025-01-30 13:50:58.895 [INFO][5077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.896353 containerd[1461]: time="2025-01-30T13:50:58.896362329Z" level=info msg="TearDown network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\" successfully" Jan 30 13:50:58.896353 containerd[1461]: time="2025-01-30T13:50:58.896399894Z" level=info msg="StopPodSandbox for \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\" returns successfully" Jan 30 13:50:58.898154 containerd[1461]: time="2025-01-30T13:50:58.897215443Z" level=info msg="RemovePodSandbox for \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\"" Jan 30 13:50:58.898154 containerd[1461]: time="2025-01-30T13:50:58.897273716Z" level=info msg="Forcibly stopping sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\"" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.944 [WARNING][5101] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"baa3f6f5-b333-4a45-a993-1c13d0b07855", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"911fec02918239eed789ca17354c89c5e2916ffd92703c2d5d77dbe718af32e2", Pod:"coredns-668d6bf9bc-xtgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a8359e760", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.944 [INFO][5101] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.944 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" iface="eth0" netns="" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.944 [INFO][5101] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.944 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.972 [INFO][5107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.972 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.972 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.979 [WARNING][5107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.979 [INFO][5107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" HandleID="k8s-pod-network.cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--xtgk9-eth0" Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.981 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:58.983942 containerd[1461]: 2025-01-30 13:50:58.982 [INFO][5101] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689" Jan 30 13:50:58.985158 containerd[1461]: time="2025-01-30T13:50:58.983989959Z" level=info msg="TearDown network for sandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\" successfully" Jan 30 13:50:58.988923 containerd[1461]: time="2025-01-30T13:50:58.988877805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:58.989106 containerd[1461]: time="2025-01-30T13:50:58.988972423Z" level=info msg="RemovePodSandbox \"cd390294d8e3de20565a85fbc0d0431ab47850fb5ca309b5a4aa9da84fd28689\" returns successfully" Jan 30 13:50:58.989757 containerd[1461]: time="2025-01-30T13:50:58.989711907Z" level=info msg="StopPodSandbox for \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\"" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.036 [WARNING][5125] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2051b70c-c342-4cb4-8d41-4fbc2d79f291", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4", Pod:"calico-apiserver-5d54978bfb-cbdp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali020b57b80a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.036 [INFO][5125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.037 [INFO][5125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" iface="eth0" netns="" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.037 [INFO][5125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.037 [INFO][5125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.065 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.066 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.066 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.074 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.074 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.076 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.079320 containerd[1461]: 2025-01-30 13:50:59.078 [INFO][5125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.079320 containerd[1461]: time="2025-01-30T13:50:59.079253152Z" level=info msg="TearDown network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\" successfully" Jan 30 13:50:59.079320 containerd[1461]: time="2025-01-30T13:50:59.079315660Z" level=info msg="StopPodSandbox for \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\" returns successfully" Jan 30 13:50:59.082607 containerd[1461]: time="2025-01-30T13:50:59.080452335Z" level=info msg="RemovePodSandbox for \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\"" Jan 30 13:50:59.082607 containerd[1461]: time="2025-01-30T13:50:59.080494136Z" level=info msg="Forcibly stopping sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\"" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.152 [WARNING][5150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2051b70c-c342-4cb4-8d41-4fbc2d79f291", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"dcc2762ea0f545d037bb6ede10414711518de6f44933d9f31fdbf411ac5002f4", Pod:"calico-apiserver-5d54978bfb-cbdp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali020b57b80a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.152 [INFO][5150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.152 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" iface="eth0" netns="" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.152 [INFO][5150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.152 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.177 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.177 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.177 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.187 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.187 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" HandleID="k8s-pod-network.d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--cbdp5-eth0" Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.189 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.192423 containerd[1461]: 2025-01-30 13:50:59.191 [INFO][5150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007" Jan 30 13:50:59.193427 containerd[1461]: time="2025-01-30T13:50:59.192479649Z" level=info msg="TearDown network for sandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\" successfully" Jan 30 13:50:59.197052 containerd[1461]: time="2025-01-30T13:50:59.196982706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:59.197231 containerd[1461]: time="2025-01-30T13:50:59.197071123Z" level=info msg="RemovePodSandbox \"d2e8cca43b34f6f28ce31ced3d8862774a7412a45ab990316d083129f586c007\" returns successfully" Jan 30 13:50:59.197766 containerd[1461]: time="2025-01-30T13:50:59.197730520Z" level=info msg="StopPodSandbox for \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\"" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.247 [WARNING][5175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fb98178-8803-46f6-a0be-7adf365c426b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0", Pod:"csi-node-driver-zc8sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c2b6de50bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.247 [INFO][5175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.247 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" iface="eth0" netns="" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.247 [INFO][5175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.247 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.275 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.276 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.276 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.284 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.284 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.286 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.288833 containerd[1461]: 2025-01-30 13:50:59.287 [INFO][5175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.288833 containerd[1461]: time="2025-01-30T13:50:59.288772708Z" level=info msg="TearDown network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\" successfully" Jan 30 13:50:59.288833 containerd[1461]: time="2025-01-30T13:50:59.288798391Z" level=info msg="StopPodSandbox for \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\" returns successfully" Jan 30 13:50:59.290570 containerd[1461]: time="2025-01-30T13:50:59.289899486Z" level=info msg="RemovePodSandbox for \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\"" Jan 30 13:50:59.290570 containerd[1461]: time="2025-01-30T13:50:59.289938537Z" level=info msg="Forcibly stopping sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\"" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.338 [WARNING][5200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fb98178-8803-46f6-a0be-7adf365c426b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"c7f52d56e98751ec93e3af60ca27e8f864cdde336f2621ad7419ffed3ea11fd0", Pod:"csi-node-driver-zc8sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c2b6de50bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.338 [INFO][5200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.338 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" iface="eth0" netns="" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.338 [INFO][5200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.338 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.366 [INFO][5206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.366 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.366 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.374 [WARNING][5206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.374 [INFO][5206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" HandleID="k8s-pod-network.9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-csi--node--driver--zc8sb-eth0" Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.376 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.380386 containerd[1461]: 2025-01-30 13:50:59.378 [INFO][5200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c" Jan 30 13:50:59.380386 containerd[1461]: time="2025-01-30T13:50:59.379563392Z" level=info msg="TearDown network for sandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\" successfully" Jan 30 13:50:59.386086 containerd[1461]: time="2025-01-30T13:50:59.386030637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:59.386199 containerd[1461]: time="2025-01-30T13:50:59.386111242Z" level=info msg="RemovePodSandbox \"9aff32cda864b2d1608ee553006b832fbb7a75307a308fa69c30294e172e612c\" returns successfully" Jan 30 13:50:59.387094 containerd[1461]: time="2025-01-30T13:50:59.386742090Z" level=info msg="StopPodSandbox for \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\"" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.440 [WARNING][5224] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb", Pod:"coredns-668d6bf9bc-nmmjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47b10689824", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.440 [INFO][5224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.440 [INFO][5224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" iface="eth0" netns="" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.440 [INFO][5224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.440 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.472 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.472 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.472 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.482 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.482 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.484 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.486956 containerd[1461]: 2025-01-30 13:50:59.485 [INFO][5224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.488422 containerd[1461]: time="2025-01-30T13:50:59.486970801Z" level=info msg="TearDown network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\" successfully" Jan 30 13:50:59.488422 containerd[1461]: time="2025-01-30T13:50:59.487015684Z" level=info msg="StopPodSandbox for \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\" returns successfully" Jan 30 13:50:59.488422 containerd[1461]: time="2025-01-30T13:50:59.487633567Z" level=info msg="RemovePodSandbox for \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\"" Jan 30 13:50:59.488422 containerd[1461]: time="2025-01-30T13:50:59.487673050Z" level=info msg="Forcibly stopping sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\"" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.532 [WARNING][5249] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7c8f8ce-efc5-476c-8e2d-4cfe1db0bb33", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"f774f022f51c391feb6147487fb005f9274a95741c57c94e78789c5dd499dccb", Pod:"coredns-668d6bf9bc-nmmjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47b10689824", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.533 [INFO][5249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.533 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" iface="eth0" netns="" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.533 [INFO][5249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.533 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.558 [INFO][5255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.558 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.558 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.566 [WARNING][5255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.566 [INFO][5255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" HandleID="k8s-pod-network.2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--nmmjn-eth0" Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.571 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.575299 containerd[1461]: 2025-01-30 13:50:59.572 [INFO][5249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f" Jan 30 13:50:59.575299 containerd[1461]: time="2025-01-30T13:50:59.573670960Z" level=info msg="TearDown network for sandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\" successfully" Jan 30 13:50:59.579466 containerd[1461]: time="2025-01-30T13:50:59.579397144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:59.579598 containerd[1461]: time="2025-01-30T13:50:59.579481763Z" level=info msg="RemovePodSandbox \"2588830b5fc43e2baf0335f07a7e46bd4ab7893b73e8b3ed99e02510b3196e4f\" returns successfully" Jan 30 13:50:59.580214 containerd[1461]: time="2025-01-30T13:50:59.580180633Z" level=info msg="StopPodSandbox for \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\"" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.626 [WARNING][5273] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0", GenerateName:"calico-kube-controllers-5786599c57-", Namespace:"calico-system", SelfLink:"", UID:"3a17c611-62b6-4f6c-b060-7ce3741ea277", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5786599c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64", Pod:"calico-kube-controllers-5786599c57-zjrvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7dc03b293f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.626 [INFO][5273] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.626 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" iface="eth0" netns="" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.626 [INFO][5273] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.626 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.655 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.656 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.656 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.664 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.664 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.665 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.668581 containerd[1461]: 2025-01-30 13:50:59.666 [INFO][5273] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.668581 containerd[1461]: time="2025-01-30T13:50:59.668381003Z" level=info msg="TearDown network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\" successfully" Jan 30 13:50:59.668581 containerd[1461]: time="2025-01-30T13:50:59.668417566Z" level=info msg="StopPodSandbox for \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\" returns successfully" Jan 30 13:50:59.669889 containerd[1461]: time="2025-01-30T13:50:59.669102472Z" level=info msg="RemovePodSandbox for \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\"" Jan 30 13:50:59.669889 containerd[1461]: time="2025-01-30T13:50:59.669152411Z" level=info msg="Forcibly stopping sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\"" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.716 [WARNING][5298] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0", GenerateName:"calico-kube-controllers-5786599c57-", Namespace:"calico-system", SelfLink:"", UID:"3a17c611-62b6-4f6c-b060-7ce3741ea277", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5786599c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"f3100e757cdea7a3d112766af456d960c52fb3451feb4884052237c81f390b64", Pod:"calico-kube-controllers-5786599c57-zjrvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7dc03b293f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.717 [INFO][5298] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.717 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" iface="eth0" netns="" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.717 [INFO][5298] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.717 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.743 [INFO][5305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.743 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.743 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.756 [WARNING][5305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.756 [INFO][5305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" HandleID="k8s-pod-network.58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--kube--controllers--5786599c57--zjrvp-eth0" Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.758 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.761499 containerd[1461]: 2025-01-30 13:50:59.759 [INFO][5298] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31" Jan 30 13:50:59.762597 containerd[1461]: time="2025-01-30T13:50:59.761565072Z" level=info msg="TearDown network for sandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\" successfully" Jan 30 13:50:59.772347 containerd[1461]: time="2025-01-30T13:50:59.771517079Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:59.772347 containerd[1461]: time="2025-01-30T13:50:59.771646439Z" level=info msg="RemovePodSandbox \"58209fed96d67646a80783f3c4ecb2875331ab32213690facf0a635f0f5e4d31\" returns successfully" Jan 30 13:50:59.773005 containerd[1461]: time="2025-01-30T13:50:59.772971660Z" level=info msg="StopPodSandbox for \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\"" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.827 [WARNING][5323] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"58e99075-1376-48e5-b2c7-99d946da1951", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d", Pod:"calico-apiserver-5d54978bfb-dbkmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d9b535ea12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.828 [INFO][5323] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.828 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" iface="eth0" netns="" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.828 [INFO][5323] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.828 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.861 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.862 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.862 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.870 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.870 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.871 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.874301 containerd[1461]: 2025-01-30 13:50:59.873 [INFO][5323] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.875940 containerd[1461]: time="2025-01-30T13:50:59.874366446Z" level=info msg="TearDown network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\" successfully" Jan 30 13:50:59.875940 containerd[1461]: time="2025-01-30T13:50:59.874403737Z" level=info msg="StopPodSandbox for \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\" returns successfully" Jan 30 13:50:59.875940 containerd[1461]: time="2025-01-30T13:50:59.875387501Z" level=info msg="RemovePodSandbox for \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\"" Jan 30 13:50:59.875940 containerd[1461]: time="2025-01-30T13:50:59.875428094Z" level=info msg="Forcibly stopping sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\"" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.920 [WARNING][5347] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0", GenerateName:"calico-apiserver-5d54978bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"58e99075-1376-48e5-b2c7-99d946da1951", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 50, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54978bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-c562bfccd2c0c34ec50f.c.flatcar-212911.internal", ContainerID:"a6aab7009007a38cceedcd7c698b1349e733ba89172511beaf66c78bab84742d", Pod:"calico-apiserver-5d54978bfb-dbkmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d9b535ea12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.921 [INFO][5347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.922 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" iface="eth0" netns="" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.922 [INFO][5347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.922 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.950 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.950 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.951 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.958 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.958 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" HandleID="k8s-pod-network.421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Workload="ci--4081--3--0--c562bfccd2c0c34ec50f.c.flatcar--212911.internal-k8s-calico--apiserver--5d54978bfb--dbkmf-eth0" Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.959 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:59.962420 containerd[1461]: 2025-01-30 13:50:59.961 [INFO][5347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2" Jan 30 13:50:59.962420 containerd[1461]: time="2025-01-30T13:50:59.962351166Z" level=info msg="TearDown network for sandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\" successfully" Jan 30 13:50:59.967978 containerd[1461]: time="2025-01-30T13:50:59.967929844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:59.968136 containerd[1461]: time="2025-01-30T13:50:59.968032933Z" level=info msg="RemovePodSandbox \"421e0c771368fe0e4ea98a719c41a544386fe75ef7226effaa6f667e5b1d45e2\" returns successfully" Jan 30 13:51:02.587666 systemd[1]: Started sshd@11-10.128.0.25:22-139.178.68.195:36544.service - OpenSSH per-connection server daemon (139.178.68.195:36544). Jan 30 13:51:02.938231 sshd[5361]: Accepted publickey for core from 139.178.68.195 port 36544 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:51:02.940081 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:02.949474 systemd-logind[1448]: New session 12 of user core. Jan 30 13:51:02.953514 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:51:03.268815 sshd[5361]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:03.274183 systemd[1]: sshd@11-10.128.0.25:22-139.178.68.195:36544.service: Deactivated successfully. Jan 30 13:51:03.277075 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:51:03.281063 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:51:03.283102 systemd-logind[1448]: Removed session 12. Jan 30 13:51:03.336730 systemd[1]: Started sshd@12-10.128.0.25:22-139.178.68.195:36560.service - OpenSSH per-connection server daemon (139.178.68.195:36560). Jan 30 13:51:03.684030 sshd[5382]: Accepted publickey for core from 139.178.68.195 port 36560 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:51:03.685844 sshd[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:03.692240 systemd-logind[1448]: New session 13 of user core. Jan 30 13:51:03.702487 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 13:51:04.055944 sshd[5382]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:04.062281 systemd[1]: sshd@12-10.128.0.25:22-139.178.68.195:36560.service: Deactivated successfully. Jan 30 13:51:04.065889 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:51:04.067602 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:51:04.069154 systemd-logind[1448]: Removed session 13. Jan 30 13:51:04.121678 systemd[1]: Started sshd@13-10.128.0.25:22-139.178.68.195:36564.service - OpenSSH per-connection server daemon (139.178.68.195:36564). Jan 30 13:51:04.470081 sshd[5393]: Accepted publickey for core from 139.178.68.195 port 36564 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:51:04.472132 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:04.479208 systemd-logind[1448]: New session 14 of user core. Jan 30 13:51:04.483484 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:51:04.806836 sshd[5393]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:04.815709 systemd[1]: sshd@13-10.128.0.25:22-139.178.68.195:36564.service: Deactivated successfully. Jan 30 13:51:04.819318 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:51:04.821595 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:51:04.823769 systemd-logind[1448]: Removed session 14. Jan 30 13:51:09.875246 systemd[1]: Started sshd@14-10.128.0.25:22-139.178.68.195:50732.service - OpenSSH per-connection server daemon (139.178.68.195:50732). Jan 30 13:51:10.240283 sshd[5434]: Accepted publickey for core from 139.178.68.195 port 50732 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:51:10.242326 sshd[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:10.249698 systemd-logind[1448]: New session 15 of user core. Jan 30 13:51:10.255654 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:51:10.575441 sshd[5434]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:10.580246 systemd[1]: sshd@14-10.128.0.25:22-139.178.68.195:50732.service: Deactivated successfully. Jan 30 13:51:10.583300 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:51:10.586362 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:51:10.588178 systemd-logind[1448]: Removed session 15. Jan 30 13:51:13.145819 systemd[1]: run-containerd-runc-k8s.io-66fcfac1654fa28a522378325305e4d101b2b27075bdb3477c242578b7f30880-runc.qfTrYo.mount: Deactivated successfully. Jan 30 13:51:15.644066 systemd[1]: Started sshd@15-10.128.0.25:22-139.178.68.195:40296.service - OpenSSH per-connection server daemon (139.178.68.195:40296). Jan 30 13:51:15.996994 sshd[5465]: Accepted publickey for core from 139.178.68.195 port 40296 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 13:51:15.998855 sshd[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:16.004756 systemd-logind[1448]: New session 16 of user core. Jan 30 13:51:16.009458 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:51:16.327923 sshd[5465]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:16.333064 systemd[1]: sshd@15-10.128.0.25:22-139.178.68.195:40296.service: Deactivated successfully. Jan 30 13:51:16.335825 systemd[1]: session-16.scope: Deactivated successfully. 
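The SSH activity from here on follows a fixed systemd pattern: each incoming connection gets its own sshd@N-LOCAL:PORT-REMOTE:PORT.service unit, pam_unix opens a session for user core, logind assigns it a numbered session, and the reverse happens on disconnect. For pulling the endpoints back out of those unit names (for instance when correlating with firewall logs), a small parser sketch; the regular expression is written against the names shown above and is an assumption, not a format guaranteed by systemd or OpenSSH:

```go
// Sketch: parse the per-connection sshd unit names visible in the journal,
// e.g. "sshd@11-10.128.0.25:22-139.178.68.195:36544.service".
// The regexp is an assumption based on the names in this log.
package main

import (
	"fmt"
	"regexp"
)

var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

type sshConnection struct {
	Instance               string
	LocalAddr, LocalPort   string
	RemoteAddr, RemotePort string
}

func parseUnit(name string) (sshConnection, bool) {
	m := unitRE.FindStringSubmatch(name)
	if m == nil {
		return sshConnection{}, false
	}
	return sshConnection{
		Instance:   m[1],
		LocalAddr:  m[2],
		LocalPort:  m[3],
		RemoteAddr: m[4],
		RemotePort: m[5],
	}, true
}

func main() {
	if c, ok := parseUnit("sshd@13-10.128.0.25:22-139.178.68.195:36564.service"); ok {
		fmt.Printf("instance %s: %s:%s <- %s:%s\n",
			c.Instance, c.LocalAddr, c.LocalPort, c.RemoteAddr, c.RemotePort)
	}
}
```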
Jan 30 13:51:16.338186 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:51:16.340009 systemd-logind[1448]: Removed session 16.
Jan 30 13:51:21.397659 systemd[1]: Started sshd@16-10.128.0.25:22-139.178.68.195:40306.service - OpenSSH per-connection server daemon (139.178.68.195:40306).
Jan 30 13:51:21.739202 sshd[5478]: Accepted publickey for core from 139.178.68.195 port 40306 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:21.741009 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:21.747508 systemd-logind[1448]: New session 17 of user core.
Jan 30 13:51:21.758537 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:51:22.065935 sshd[5478]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:22.072385 systemd[1]: sshd@16-10.128.0.25:22-139.178.68.195:40306.service: Deactivated successfully.
Jan 30 13:51:22.075738 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:51:22.076850 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:51:22.078418 systemd-logind[1448]: Removed session 17.
Jan 30 13:51:27.134666 systemd[1]: Started sshd@17-10.128.0.25:22-139.178.68.195:59388.service - OpenSSH per-connection server daemon (139.178.68.195:59388).
Jan 30 13:51:27.489142 sshd[5498]: Accepted publickey for core from 139.178.68.195 port 59388 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:27.491209 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:27.498127 systemd-logind[1448]: New session 18 of user core.
Jan 30 13:51:27.507486 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:51:27.815235 sshd[5498]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:27.821337 systemd[1]: sshd@17-10.128.0.25:22-139.178.68.195:59388.service: Deactivated successfully.
Jan 30 13:51:27.824396 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:51:27.826506 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:51:27.828452 systemd-logind[1448]: Removed session 18.
Jan 30 13:51:27.880688 systemd[1]: Started sshd@18-10.128.0.25:22-139.178.68.195:59402.service - OpenSSH per-connection server daemon (139.178.68.195:59402).
Jan 30 13:51:28.222761 sshd[5510]: Accepted publickey for core from 139.178.68.195 port 59402 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:28.224598 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:28.231151 systemd-logind[1448]: New session 19 of user core.
Jan 30 13:51:28.235464 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:51:28.630576 sshd[5510]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:28.635523 systemd[1]: sshd@18-10.128.0.25:22-139.178.68.195:59402.service: Deactivated successfully.
Jan 30 13:51:28.638819 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:51:28.640817 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:51:28.642517 systemd-logind[1448]: Removed session 19.
Jan 30 13:51:28.700698 systemd[1]: Started sshd@19-10.128.0.25:22-139.178.68.195:59404.service - OpenSSH per-connection server daemon (139.178.68.195:59404).
Jan 30 13:51:29.042080 sshd[5521]: Accepted publickey for core from 139.178.68.195 port 59404 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:29.043913 sshd[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:29.050288 systemd-logind[1448]: New session 20 of user core.
Jan 30 13:51:29.054492 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:51:30.079847 sshd[5521]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:30.086214 systemd[1]: sshd@19-10.128.0.25:22-139.178.68.195:59404.service: Deactivated successfully.
Jan 30 13:51:30.088977 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:51:30.090252 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:51:30.091756 systemd-logind[1448]: Removed session 20.
Jan 30 13:51:30.145945 systemd[1]: Started sshd@20-10.128.0.25:22-139.178.68.195:59416.service - OpenSSH per-connection server daemon (139.178.68.195:59416).
Jan 30 13:51:30.500383 sshd[5539]: Accepted publickey for core from 139.178.68.195 port 59416 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:30.502135 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:30.509878 systemd-logind[1448]: New session 21 of user core.
Jan 30 13:51:30.517626 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:51:30.969146 sshd[5539]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:30.975634 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:51:30.976637 systemd[1]: sshd@20-10.128.0.25:22-139.178.68.195:59416.service: Deactivated successfully.
Jan 30 13:51:30.980031 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:51:30.981907 systemd-logind[1448]: Removed session 21.
Jan 30 13:51:31.032638 systemd[1]: Started sshd@21-10.128.0.25:22-139.178.68.195:59418.service - OpenSSH per-connection server daemon (139.178.68.195:59418).
Jan 30 13:51:31.374848 sshd[5550]: Accepted publickey for core from 139.178.68.195 port 59418 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:31.377107 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:31.383453 systemd-logind[1448]: New session 22 of user core.
Jan 30 13:51:31.387447 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:51:31.693576 sshd[5550]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:31.698248 systemd[1]: sshd@21-10.128.0.25:22-139.178.68.195:59418.service: Deactivated successfully.
Jan 30 13:51:31.701359 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:51:31.703781 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:51:31.705426 systemd-logind[1448]: Removed session 22.
Jan 30 13:51:36.759889 systemd[1]: Started sshd@22-10.128.0.25:22-139.178.68.195:37398.service - OpenSSH per-connection server daemon (139.178.68.195:37398).
Jan 30 13:51:37.102308 sshd[5567]: Accepted publickey for core from 139.178.68.195 port 37398 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:37.104102 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:37.110367 systemd-logind[1448]: New session 23 of user core.
Jan 30 13:51:37.117463 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:51:37.421658 sshd[5567]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:37.427430 systemd[1]: sshd@22-10.128.0.25:22-139.178.68.195:37398.service: Deactivated successfully.
Jan 30 13:51:37.430295 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:51:37.431364 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:51:37.433148 systemd-logind[1448]: Removed session 23.
Jan 30 13:51:42.490707 systemd[1]: Started sshd@23-10.128.0.25:22-139.178.68.195:37406.service - OpenSSH per-connection server daemon (139.178.68.195:37406).
Jan 30 13:51:42.844421 sshd[5603]: Accepted publickey for core from 139.178.68.195 port 37406 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:42.846472 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:42.852840 systemd-logind[1448]: New session 24 of user core.
Jan 30 13:51:42.857475 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:51:43.178801 systemd[1]: run-containerd-runc-k8s.io-66fcfac1654fa28a522378325305e4d101b2b27075bdb3477c242578b7f30880-runc.CADjeH.mount: Deactivated successfully.
Jan 30 13:51:43.246478 sshd[5603]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:43.253882 systemd[1]: sshd@23-10.128.0.25:22-139.178.68.195:37406.service: Deactivated successfully.
Jan 30 13:51:43.254226 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:51:43.258370 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:51:43.261784 systemd-logind[1448]: Removed session 24.
Jan 30 13:51:48.315026 systemd[1]: Started sshd@24-10.128.0.25:22-139.178.68.195:33596.service - OpenSSH per-connection server daemon (139.178.68.195:33596).
Jan 30 13:51:48.675435 sshd[5635]: Accepted publickey for core from 139.178.68.195 port 33596 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 13:51:48.677700 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:48.684806 systemd-logind[1448]: New session 25 of user core.
Jan 30 13:51:48.691131 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:51:49.021015 sshd[5635]: pam_unix(sshd:session): session closed for user core
Jan 30 13:51:49.026932 systemd[1]: sshd@24-10.128.0.25:22-139.178.68.195:33596.service: Deactivated successfully.
Jan 30 13:51:49.029664 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:51:49.030868 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:51:49.032509 systemd-logind[1448]: Removed session 25.